00:00:00.002 Started by upstream project "autotest-per-patch" build number 121209 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.098 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.099 The recommended git tool is: git 00:00:00.099 using credential 00000000-0000-0000-0000-000000000002 00:00:00.100 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.185 Fetching changes from the remote Git repository 00:00:00.187 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.257 Using shallow fetch with depth 1 00:00:00.257 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.257 > git --version # timeout=10 00:00:00.302 > git --version # 'git version 2.39.2' 00:00:00.303 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.303 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.303 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.249 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.266 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.280 Checking out Revision 6201031def5bfb7f90a861bc162998684798607e (FETCH_HEAD) 00:00:05.280 > git config core.sparsecheckout # timeout=10 00:00:05.294 > git read-tree -mu HEAD # timeout=10 00:00:05.313 > git checkout -f 6201031def5bfb7f90a861bc162998684798607e # timeout=5 00:00:05.334 Commit message: "scripts/kid: Add issue 3354" 00:00:05.334 > git rev-list --no-walk 6201031def5bfb7f90a861bc162998684798607e # timeout=10 00:00:05.436 [Pipeline] Start of Pipeline 00:00:05.451 [Pipeline] library 00:00:05.453 Loading library shm_lib@master 00:00:05.453 Library shm_lib@master is cached. Copying from home. 00:00:05.470 [Pipeline] node 00:00:05.485 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.486 [Pipeline] { 00:00:05.497 [Pipeline] catchError 00:00:05.498 [Pipeline] { 00:00:05.509 [Pipeline] wrap 00:00:05.517 [Pipeline] { 00:00:05.523 [Pipeline] stage 00:00:05.524 [Pipeline] { (Prologue) 00:00:05.713 [Pipeline] sh 00:00:06.049 + logger -p user.info -t JENKINS-CI 00:00:06.068 [Pipeline] echo 00:00:06.070 Node: WFP22 00:00:06.075 [Pipeline] sh 00:00:06.368 [Pipeline] setCustomBuildProperty 00:00:06.380 [Pipeline] echo 00:00:06.381 Cleanup processes 00:00:06.388 [Pipeline] sh 00:00:06.669 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.669 1792735 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.682 [Pipeline] sh 00:00:06.963 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.963 ++ grep -v 'sudo pgrep' 00:00:06.963 ++ awk '{print $1}' 00:00:06.963 + sudo kill -9 00:00:06.963 + true 00:00:06.976 [Pipeline] cleanWs 00:00:06.984 [WS-CLEANUP] Deleting project workspace... 00:00:06.984 [WS-CLEANUP] Deferred wipeout is used... 
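[Editor's note] The "Cleanup processes" step traced a few entries above kills anything left over from a previous job on this node: sudo pgrep -af lists PIDs with full command lines for everything referencing the workspace, grep -v drops the pgrep invocation itself, awk keeps just the PID column, and the trailing "+ true" keeps an empty kill from failing the stage. A minimal standalone sketch of that idiom (WORKSPACE is a placeholder for the job path shown in the log):

#!/usr/bin/env bash
# Kill leftover processes whose command line references the workspace.
WORKSPACE=${WORKSPACE:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

# pgrep -af: list "PID full-command-line"; exclude the pgrep we just ran.
pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')

# With no stale processes the PID list is empty and kill exits non-zero,
# so '|| true' (the '+ true' in the trace) keeps the stage green.
sudo kill -9 $pids || true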
00:00:06.992 [WS-CLEANUP] done 00:00:06.995 [Pipeline] setCustomBuildProperty 00:00:07.006 [Pipeline] sh 00:00:07.285 + sudo git config --global --replace-all safe.directory '*' 00:00:07.347 [Pipeline] nodesByLabel 00:00:07.348 Found a total of 1 nodes with the 'sorcerer' label 00:00:07.357 [Pipeline] httpRequest 00:00:07.363 HttpMethod: GET 00:00:07.364 URL: http://10.211.164.96/packages/jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:07.390 Sending request to url: http://10.211.164.96/packages/jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:07.442 Response Code: HTTP/1.1 200 OK 00:00:07.443 Success: Status code 200 is in the accepted range: 200,404 00:00:07.444 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:28.396 [Pipeline] sh 00:00:28.679 + tar --no-same-owner -xf jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:28.701 [Pipeline] httpRequest 00:00:28.706 HttpMethod: GET 00:00:28.707 URL: http://10.211.164.96/packages/spdk_f8d98be2d666006fde912151d3d5561fe7bf8b7e.tar.gz 00:00:28.708 Sending request to url: http://10.211.164.96/packages/spdk_f8d98be2d666006fde912151d3d5561fe7bf8b7e.tar.gz 00:00:28.712 Response Code: HTTP/1.1 200 OK 00:00:28.712 Success: Status code 200 is in the accepted range: 200,404 00:00:28.713 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f8d98be2d666006fde912151d3d5561fe7bf8b7e.tar.gz 00:04:16.275 [Pipeline] sh 00:04:16.562 + tar --no-same-owner -xf spdk_f8d98be2d666006fde912151d3d5561fe7bf8b7e.tar.gz 00:04:19.108 [Pipeline] sh 00:04:19.395 + git -C spdk log --oneline -n5 00:04:19.395 f8d98be2d nvmf: remove cb_fn/cb_arg from spdk_nvmf_qpair_disconnect() 00:04:19.395 3dbaa93c1 nvmf: pass command dword 12 and 13 for write 00:04:19.395 19327fc3a bdev/nvme: use dtype/dspec for write commands 00:04:19.395 c11e5c113 bdev: introduce bdev_nvme_cdw12 and cdw13, and add them to ext_opts 00:04:19.395 037d51655 nvmf: fdp capability to the subsystem 00:04:19.408 [Pipeline] } 00:04:19.426 [Pipeline] // stage 00:04:19.435 [Pipeline] stage 00:04:19.437 [Pipeline] { (Prepare) 00:04:19.457 [Pipeline] writeFile 00:04:19.476 [Pipeline] sh 00:04:19.759 + logger -p user.info -t JENKINS-CI 00:04:19.772 [Pipeline] sh 00:04:20.054 + logger -p user.info -t JENKINS-CI 00:04:20.067 [Pipeline] sh 00:04:20.348 + cat autorun-spdk.conf 00:04:20.348 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:20.348 SPDK_TEST_NVMF=1 00:04:20.348 SPDK_TEST_NVME_CLI=1 00:04:20.348 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:20.348 SPDK_TEST_NVMF_NICS=e810 00:04:20.348 SPDK_TEST_VFIOUSER=1 00:04:20.348 SPDK_RUN_UBSAN=1 00:04:20.348 NET_TYPE=phy 00:04:20.356 RUN_NIGHTLY=0 00:04:20.361 [Pipeline] readFile 00:04:20.386 [Pipeline] withEnv 00:04:20.388 [Pipeline] { 00:04:20.403 [Pipeline] sh 00:04:20.689 + set -ex 00:04:20.690 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:04:20.690 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:20.690 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:20.690 ++ SPDK_TEST_NVMF=1 00:04:20.690 ++ SPDK_TEST_NVME_CLI=1 00:04:20.690 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:20.690 ++ SPDK_TEST_NVMF_NICS=e810 00:04:20.690 ++ SPDK_TEST_VFIOUSER=1 00:04:20.690 ++ SPDK_RUN_UBSAN=1 00:04:20.690 ++ NET_TYPE=phy 00:04:20.690 ++ RUN_NIGHTLY=0 00:04:20.690 + case $SPDK_TEST_NVMF_NICS in 00:04:20.690 + DRIVERS=ice 00:04:20.690 + [[ tcp == \r\d\m\a ]] 00:04:20.690 + [[ -n ice ]] 00:04:20.690 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 
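[Editor's note] The rmmod just traced is expected to fail on this host: none of the RDMA-capable modules it names are loaded, which is exactly what the errors in the next entries report, and the "+ true" that follows swallows them before ice (the Intel E810 driver selected by SPDK_TEST_NVMF_NICS=e810 in autorun-spdk.conf above) is loaded. A condensed sketch of that driver-preparation logic as it appears in the trace:

#!/usr/bin/env bash
set -e
SPDK_TEST_NVMF_NICS=e810   # from autorun-spdk.conf above

case $SPDK_TEST_NVMF_NICS in
    e810) DRIVERS=ice ;;   # Intel E810 NICs use the ice driver
esac

# Unload anything RDMA-capable that might claim the NICs; most of these
# modules are simply absent, so the whole command is allowed to fail.
sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true

for D in $DRIVERS; do
    sudo modprobe "$D"
done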
00:04:20.690 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:04:20.690 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:04:20.690 rmmod: ERROR: Module irdma is not currently loaded 00:04:20.690 rmmod: ERROR: Module i40iw is not currently loaded 00:04:20.690 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:04:20.690 + true 00:04:20.690 + for D in $DRIVERS 00:04:20.690 + sudo modprobe ice 00:04:20.690 + exit 0 00:04:20.700 [Pipeline] } 00:04:20.725 [Pipeline] // withEnv 00:04:20.731 [Pipeline] } 00:04:20.750 [Pipeline] // stage 00:04:20.759 [Pipeline] catchError 00:04:20.761 [Pipeline] { 00:04:20.776 [Pipeline] timeout 00:04:20.776 Timeout set to expire in 40 min 00:04:20.778 [Pipeline] { 00:04:20.794 [Pipeline] stage 00:04:20.796 [Pipeline] { (Tests) 00:04:20.814 [Pipeline] sh 00:04:21.096 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:21.096 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:21.096 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:21.096 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:04:21.096 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:21.096 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:21.096 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:04:21.096 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:21.096 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:21.096 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:21.096 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:21.096 + source /etc/os-release 00:04:21.096 ++ NAME='Fedora Linux' 00:04:21.096 ++ VERSION='38 (Cloud Edition)' 00:04:21.096 ++ ID=fedora 00:04:21.096 ++ VERSION_ID=38 00:04:21.096 ++ VERSION_CODENAME= 00:04:21.096 ++ PLATFORM_ID=platform:f38 00:04:21.096 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:04:21.096 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:21.096 ++ LOGO=fedora-logo-icon 00:04:21.096 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:04:21.096 ++ HOME_URL=https://fedoraproject.org/ 00:04:21.096 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:04:21.096 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:21.096 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:21.096 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:21.096 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:04:21.096 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:21.096 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:04:21.096 ++ SUPPORT_END=2024-05-14 00:04:21.096 ++ VARIANT='Cloud Edition' 00:04:21.096 ++ VARIANT_ID=cloud 00:04:21.096 + uname -a 00:04:21.096 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:04:21.096 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:24.384 Hugepages 00:04:24.384 node hugesize free / total 00:04:24.384 node0 1048576kB 0 / 0 00:04:24.384 node0 2048kB 0 / 0 00:04:24.384 node1 1048576kB 0 / 0 00:04:24.384 node1 2048kB 0 / 0 00:04:24.384 00:04:24.384 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:24.384 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:24.384 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:24.384 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:24.384 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:24.384 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:24.384 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 
00:04:24.384 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:24.384 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:24.384 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:24.384 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:24.384 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:24.384 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:24.384 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:24.384 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:24.384 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:24.384 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:24.384 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:24.384 + rm -f /tmp/spdk-ld-path 00:04:24.384 + source autorun-spdk.conf 00:04:24.384 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:24.384 ++ SPDK_TEST_NVMF=1 00:04:24.384 ++ SPDK_TEST_NVME_CLI=1 00:04:24.384 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:24.384 ++ SPDK_TEST_NVMF_NICS=e810 00:04:24.384 ++ SPDK_TEST_VFIOUSER=1 00:04:24.384 ++ SPDK_RUN_UBSAN=1 00:04:24.384 ++ NET_TYPE=phy 00:04:24.384 ++ RUN_NIGHTLY=0 00:04:24.384 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:24.384 + [[ -n '' ]] 00:04:24.384 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:24.384 + for M in /var/spdk/build-*-manifest.txt 00:04:24.384 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:24.384 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:24.384 + for M in /var/spdk/build-*-manifest.txt 00:04:24.384 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:24.384 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:24.384 ++ uname 00:04:24.384 + [[ Linux == \L\i\n\u\x ]] 00:04:24.384 + sudo dmesg -T 00:04:24.384 + sudo dmesg --clear 00:04:24.384 + dmesg_pid=1794207 00:04:24.384 + [[ Fedora Linux == FreeBSD ]] 00:04:24.384 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:24.385 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:24.385 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:24.385 + [[ -x /usr/src/fio-static/fio ]] 00:04:24.385 + export FIO_BIN=/usr/src/fio-static/fio 00:04:24.385 + FIO_BIN=/usr/src/fio-static/fio 00:04:24.385 + sudo dmesg -Tw 00:04:24.385 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:24.385 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:04:24.385 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:24.385 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:24.385 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:24.385 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:24.385 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:24.385 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:24.385 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:24.385 Test configuration: 00:04:24.385 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:24.385 SPDK_TEST_NVMF=1 00:04:24.385 SPDK_TEST_NVME_CLI=1 00:04:24.385 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:24.385 SPDK_TEST_NVMF_NICS=e810 00:04:24.385 SPDK_TEST_VFIOUSER=1 00:04:24.385 SPDK_RUN_UBSAN=1 00:04:24.385 NET_TYPE=phy 00:04:24.385 RUN_NIGHTLY=0 08:38:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:24.385 08:38:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:24.385 08:38:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.385 08:38:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.385 08:38:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.385 08:38:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.385 08:38:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.385 08:38:41 -- paths/export.sh@5 -- $ export PATH 00:04:24.385 08:38:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.385 08:38:41 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:24.385 08:38:41 -- common/autobuild_common.sh@435 -- $ date +%s 00:04:24.385 08:38:41 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714113521.XXXXXX 00:04:24.385 08:38:41 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714113521.FJug1I 00:04:24.385 08:38:41 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:04:24.385 08:38:41 -- 
common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:04:24.385 08:38:41 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:04:24.385 08:38:41 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:04:24.385 08:38:41 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:04:24.385 08:38:41 -- common/autobuild_common.sh@451 -- $ get_config_params 00:04:24.385 08:38:41 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:04:24.385 08:38:41 -- common/autotest_common.sh@10 -- $ set +x 00:04:24.385 08:38:41 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:04:24.385 08:38:41 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:04:24.385 08:38:41 -- pm/common@17 -- $ local monitor 00:04:24.385 08:38:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.385 08:38:41 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1794241 00:04:24.385 08:38:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.385 08:38:41 -- pm/common@21 -- $ date +%s 00:04:24.385 08:38:41 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1794243 00:04:24.385 08:38:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.385 08:38:41 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1794246 00:04:24.385 08:38:41 -- pm/common@21 -- $ date +%s 00:04:24.385 08:38:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:24.385 08:38:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714113521 00:04:24.385 08:38:41 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1794248 00:04:24.385 08:38:41 -- pm/common@21 -- $ date +%s 00:04:24.385 08:38:41 -- pm/common@26 -- $ sleep 1 00:04:24.385 08:38:41 -- pm/common@21 -- $ date +%s 00:04:24.385 08:38:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714113521 00:04:24.385 08:38:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714113521 00:04:24.385 08:38:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714113521 00:04:24.385 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714113521_collect-bmc-pm.bmc.pm.log 00:04:24.385 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714113521_collect-cpu-load.pm.log 00:04:24.385 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714113521_collect-vmstat.pm.log 00:04:24.385 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714113521_collect-cpu-temp.pm.log 00:04:25.323 08:38:42 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:04:25.323 08:38:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:25.323 08:38:42 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:25.323 08:38:42 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.323 08:38:42 -- spdk/autobuild.sh@16 -- $ date -u 00:04:25.323 Fri Apr 26 06:38:42 AM UTC 2024 00:04:25.323 08:38:42 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:25.323 v24.05-pre-447-gf8d98be2d 00:04:25.323 08:38:42 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:25.323 08:38:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:25.323 08:38:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:25.323 08:38:42 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:04:25.323 08:38:42 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:04:25.323 08:38:42 -- common/autotest_common.sh@10 -- $ set +x 00:04:25.582 ************************************ 00:04:25.582 START TEST ubsan 00:04:25.582 ************************************ 00:04:25.582 08:38:42 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:04:25.582 using ubsan 00:04:25.582 00:04:25.582 real 0m0.000s 00:04:25.582 user 0m0.000s 00:04:25.582 sys 0m0.000s 00:04:25.582 08:38:42 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:04:25.582 08:38:42 -- common/autotest_common.sh@10 -- $ set +x 00:04:25.582 ************************************ 00:04:25.582 END TEST ubsan 00:04:25.582 ************************************ 00:04:25.582 08:38:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:25.582 08:38:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:25.582 08:38:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:25.582 08:38:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:25.582 08:38:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:25.582 08:38:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:25.582 08:38:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:25.582 08:38:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:25.582 08:38:42 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:04:25.851 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:04:25.851 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:26.143 Using 'verbs' RDMA provider 00:04:41.981 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:54.240 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:54.240 Creating mk/config.mk...done. 00:04:54.240 Creating mk/cc.flags.mk...done. 00:04:54.240 Type 'make' to build. 
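[Editor's note] The ubsan block above (and the make run that follows) is produced by SPDK's run_test helper from test/common/autotest_common.sh, which brackets a command with START/END banners and times it so each sub-test is delimited in the log; the real/user/sys lines are bash's time output. A simplified stand-in for that wrapper (the real helper does more bookkeeping for the end-of-run summary):

#!/usr/bin/env bash
# Simplified sketch of SPDK's run_test: banner, timed command, banner.
run_test() {
    local name=$1; shift
    local bar='************************************'
    echo "$bar"; echo "START TEST $name"; echo "$bar"
    time "$@"
    echo "$bar"; echo "END TEST $name"; echo "$bar"
}

run_test ubsan echo 'using ubsan'
run_test make make -j"$(nproc)"   # the job pins -j112; nproc is a stand-in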
00:04:54.240 08:39:10 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:04:54.240 08:39:10 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:04:54.240 08:39:10 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:04:54.240 08:39:10 -- common/autotest_common.sh@10 -- $ set +x 00:04:54.240 ************************************ 00:04:54.240 START TEST make 00:04:54.240 ************************************ 00:04:54.240 08:39:10 -- common/autotest_common.sh@1111 -- $ make -j112 00:04:54.240 make[1]: Nothing to be done for 'all'. 00:04:55.621 The Meson build system 00:04:55.621 Version: 1.3.1 00:04:55.621 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:55.621 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:55.621 Build type: native build 00:04:55.621 Project name: libvfio-user 00:04:55.621 Project version: 0.0.1 00:04:55.621 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:04:55.621 C linker for the host machine: cc ld.bfd 2.39-16 00:04:55.621 Host machine cpu family: x86_64 00:04:55.621 Host machine cpu: x86_64 00:04:55.621 Run-time dependency threads found: YES 00:04:55.621 Library dl found: YES 00:04:55.621 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:04:55.621 Run-time dependency json-c found: YES 0.17 00:04:55.621 Run-time dependency cmocka found: YES 1.1.7 00:04:55.621 Program pytest-3 found: NO 00:04:55.621 Program flake8 found: NO 00:04:55.621 Program misspell-fixer found: NO 00:04:55.621 Program restructuredtext-lint found: NO 00:04:55.621 Program valgrind found: YES (/usr/bin/valgrind) 00:04:55.621 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:55.621 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:55.621 Compiler for C supports arguments -Wwrite-strings: YES 00:04:55.621 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:55.621 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:55.621 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:55.621 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
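[Editor's note] libvfio-user is configured here as a native debug build producing shared libraries, nominally installed under /usr/local/lib; those values reappear in the "User defined options" summary just below. Reproducing roughly this configuration by hand would look like the following sketch (source and build paths copied from the log; the -D options are standard Meson built-ins):

#!/usr/bin/env bash
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Out-of-tree Meson configure matching the options the log reports.
meson setup "$SPDK/build/libvfio-user/build-debug" "$SPDK/libvfio-user" \
    -Dbuildtype=debug \
    -Ddefault_library=shared \
    -Dlibdir=/usr/local/lib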
00:04:55.621 Build targets in project: 8 00:04:55.621 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:55.621 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:55.621 00:04:55.621 libvfio-user 0.0.1 00:04:55.621 00:04:55.621 User defined options 00:04:55.621 buildtype : debug 00:04:55.621 default_library: shared 00:04:55.621 libdir : /usr/local/lib 00:04:55.621 00:04:55.621 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:55.879 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:55.879 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:55.879 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:55.879 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:55.879 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:55.879 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:55.879 [6/37] Compiling C object samples/null.p/null.c.o 00:04:55.879 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:55.879 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:55.879 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:55.879 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:55.879 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:55.879 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:55.879 [13/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:55.879 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:55.879 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:55.879 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:55.879 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:55.879 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:56.136 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:56.136 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:56.136 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:56.136 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:56.136 [23/37] Compiling C object samples/server.p/server.c.o 00:04:56.136 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:56.136 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:56.136 [26/37] Compiling C object samples/client.p/client.c.o 00:04:56.136 [27/37] Linking target samples/client 00:04:56.136 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:56.136 [29/37] Linking target test/unit_tests 00:04:56.136 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:56.136 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:04:56.395 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:56.395 [33/37] Linking target samples/gpio-pci-idio-16 00:04:56.395 [34/37] Linking target samples/server 00:04:56.395 [35/37] Linking target samples/shadow_ioeventfd_server 00:04:56.395 [36/37] Linking target samples/null 00:04:56.395 [37/37] Linking target samples/lspci 00:04:56.395 INFO: autodetecting backend as ninja 00:04:56.395 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
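[Editor's note] The install that comes next is staged: with DESTDIR set, Meson prefixes every install path, so the freshly linked libraries land under spdk/build/libvfio-user instead of the system-wide /usr/local/lib the project was configured for. The pattern, using the exact paths from the log:

#!/usr/bin/env bash
set -e
STAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user

# ninja compiled and linked the targets above; DESTDIR now reroutes the
# configured prefix into a staging tree, leaving the host system untouched.
DESTDIR="$STAGE" meson install --quiet -C "$STAGE/build-debug"

# e.g. libvfio-user.so.0.0.1 ends up under $STAGE/usr/local/lib/
ls "$STAGE/usr/local/lib"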
00:04:56.395 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:56.653 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:56.653 ninja: no work to do. 00:05:01.918 The Meson build system 00:05:01.918 Version: 1.3.1 00:05:01.918 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:05:01.918 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:05:01.918 Build type: native build 00:05:01.918 Program cat found: YES (/usr/bin/cat) 00:05:01.918 Project name: DPDK 00:05:01.918 Project version: 23.11.0 00:05:01.918 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:05:01.918 C linker for the host machine: cc ld.bfd 2.39-16 00:05:01.918 Host machine cpu family: x86_64 00:05:01.918 Host machine cpu: x86_64 00:05:01.918 Message: ## Building in Developer Mode ## 00:05:01.918 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:01.918 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:05:01.918 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:01.918 Program python3 found: YES (/usr/bin/python3) 00:05:01.918 Program cat found: YES (/usr/bin/cat) 00:05:01.918 Compiler for C supports arguments -march=native: YES 00:05:01.918 Checking for size of "void *" : 8 00:05:01.918 Checking for size of "void *" : 8 (cached) 00:05:01.918 Library m found: YES 00:05:01.918 Library numa found: YES 00:05:01.918 Has header "numaif.h" : YES 00:05:01.918 Library fdt found: NO 00:05:01.918 Library execinfo found: NO 00:05:01.918 Has header "execinfo.h" : YES 00:05:01.918 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:05:01.918 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:01.918 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:01.918 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:01.918 Run-time dependency openssl found: YES 3.0.9 00:05:01.918 Run-time dependency libpcap found: YES 1.10.4 00:05:01.918 Has header "pcap.h" with dependency libpcap: YES 00:05:01.918 Compiler for C supports arguments -Wcast-qual: YES 00:05:01.918 Compiler for C supports arguments -Wdeprecated: YES 00:05:01.918 Compiler for C supports arguments -Wformat: YES 00:05:01.918 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:01.918 Compiler for C supports arguments -Wformat-security: NO 00:05:01.918 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:01.918 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:01.918 Compiler for C supports arguments -Wnested-externs: YES 00:05:01.918 Compiler for C supports arguments -Wold-style-definition: YES 00:05:01.918 Compiler for C supports arguments -Wpointer-arith: YES 00:05:01.918 Compiler for C supports arguments -Wsign-compare: YES 00:05:01.918 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:01.918 Compiler for C supports arguments -Wundef: YES 00:05:01.918 Compiler for C supports arguments -Wwrite-strings: YES 00:05:01.918 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:01.918 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:01.918 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:05:01.918 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:01.918 Program objdump found: YES (/usr/bin/objdump) 00:05:01.918 Compiler for C supports arguments -mavx512f: YES 00:05:01.918 Checking if "AVX512 checking" compiles: YES 00:05:01.918 Fetching value of define "__SSE4_2__" : 1 00:05:01.918 Fetching value of define "__AES__" : 1 00:05:01.918 Fetching value of define "__AVX__" : 1 00:05:01.918 Fetching value of define "__AVX2__" : 1 00:05:01.918 Fetching value of define "__AVX512BW__" : 1 00:05:01.918 Fetching value of define "__AVX512CD__" : 1 00:05:01.918 Fetching value of define "__AVX512DQ__" : 1 00:05:01.918 Fetching value of define "__AVX512F__" : 1 00:05:01.918 Fetching value of define "__AVX512VL__" : 1 00:05:01.918 Fetching value of define "__PCLMUL__" : 1 00:05:01.918 Fetching value of define "__RDRND__" : 1 00:05:01.918 Fetching value of define "__RDSEED__" : 1 00:05:01.918 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:01.918 Fetching value of define "__znver1__" : (undefined) 00:05:01.918 Fetching value of define "__znver2__" : (undefined) 00:05:01.918 Fetching value of define "__znver3__" : (undefined) 00:05:01.918 Fetching value of define "__znver4__" : (undefined) 00:05:01.918 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:01.918 Message: lib/log: Defining dependency "log" 00:05:01.918 Message: lib/kvargs: Defining dependency "kvargs" 00:05:01.918 Message: lib/telemetry: Defining dependency "telemetry" 00:05:01.918 Checking for function "getentropy" : NO 00:05:01.918 Message: lib/eal: Defining dependency "eal" 00:05:01.918 Message: lib/ring: Defining dependency "ring" 00:05:01.918 Message: lib/rcu: Defining dependency "rcu" 00:05:01.918 Message: lib/mempool: Defining dependency "mempool" 00:05:01.918 Message: lib/mbuf: Defining dependency "mbuf" 00:05:01.918 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:01.918 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:01.918 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:01.918 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:01.918 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:01.918 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:05:01.918 Compiler for C supports arguments -mpclmul: YES 00:05:01.918 Compiler for C supports arguments -maes: YES 00:05:01.918 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:01.918 Compiler for C supports arguments -mavx512bw: YES 00:05:01.918 Compiler for C supports arguments -mavx512dq: YES 00:05:01.918 Compiler for C supports arguments -mavx512vl: YES 00:05:01.918 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:01.918 Compiler for C supports arguments -mavx2: YES 00:05:01.918 Compiler for C supports arguments -mavx: YES 00:05:01.918 Message: lib/net: Defining dependency "net" 00:05:01.918 Message: lib/meter: Defining dependency "meter" 00:05:01.918 Message: lib/ethdev: Defining dependency "ethdev" 00:05:01.918 Message: lib/pci: Defining dependency "pci" 00:05:01.918 Message: lib/cmdline: Defining dependency "cmdline" 00:05:01.918 Message: lib/hash: Defining dependency "hash" 00:05:01.918 Message: lib/timer: Defining dependency "timer" 00:05:01.918 Message: lib/compressdev: Defining dependency "compressdev" 00:05:01.918 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:01.918 Message: lib/dmadev: Defining dependency "dmadev" 00:05:01.918 Compiler for C supports arguments -Wno-cast-qual: YES 
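[Editor's note] The long run of "Compiler for C supports arguments ..." and "Fetching value of define ..." lines is Meson probing the toolchain before DPDK decides which SIMD code paths to enable: every flag is test-compiled, and every __AVX512F__-style macro is read back from the preprocessor. Both probes can be reproduced from a shell, which is roughly what happens under the hood (cc stands in for whatever compiler Meson found):

#!/usr/bin/env bash
# Probe 1: does the compiler accept a flag? Test-compile an empty program.
supports_flag() {
    echo 'int main(void){return 0;}' | cc -Werror "$1" -x c - -o /dev/null 2>/dev/null
}
supports_flag -mavx512f && echo '-mavx512f: YES' || echo '-mavx512f: NO'

# Probe 2: what value does a predefined macro have? Dump the preprocessor's
# built-in defines (the -march=native check appears earlier in the log).
cc -march=native -dM -E - </dev/null | grep -E '__AVX512(F|BW|DQ|VL)__|__AES__|__PCLMUL__'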
00:05:01.918 Message: lib/power: Defining dependency "power" 00:05:01.918 Message: lib/reorder: Defining dependency "reorder" 00:05:01.918 Message: lib/security: Defining dependency "security" 00:05:01.918 Has header "linux/userfaultfd.h" : YES 00:05:01.918 Has header "linux/vduse.h" : YES 00:05:01.918 Message: lib/vhost: Defining dependency "vhost" 00:05:01.918 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:01.918 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:01.918 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:01.918 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:01.918 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:01.918 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:01.918 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:01.918 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:01.918 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:01.918 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:01.918 Program doxygen found: YES (/usr/bin/doxygen) 00:05:01.918 Configuring doxy-api-html.conf using configuration 00:05:01.918 Configuring doxy-api-man.conf using configuration 00:05:01.918 Program mandb found: YES (/usr/bin/mandb) 00:05:01.918 Program sphinx-build found: NO 00:05:01.918 Configuring rte_build_config.h using configuration 00:05:01.918 Message: 00:05:01.918 ================= 00:05:01.918 Applications Enabled 00:05:01.918 ================= 00:05:01.918 00:05:01.918 apps: 00:05:01.918 00:05:01.918 00:05:01.918 Message: 00:05:01.918 ================= 00:05:01.918 Libraries Enabled 00:05:01.918 ================= 00:05:01.918 00:05:01.918 libs: 00:05:01.918 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:01.918 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:01.918 cryptodev, dmadev, power, reorder, security, vhost, 00:05:01.918 00:05:01.918 Message: 00:05:01.918 =============== 00:05:01.918 Drivers Enabled 00:05:01.918 =============== 00:05:01.918 00:05:01.918 common: 00:05:01.918 00:05:01.918 bus: 00:05:01.918 pci, vdev, 00:05:01.918 mempool: 00:05:01.918 ring, 00:05:01.918 dma: 00:05:01.918 00:05:01.918 net: 00:05:01.918 00:05:01.918 crypto: 00:05:01.918 00:05:01.918 compress: 00:05:01.918 00:05:01.918 vdpa: 00:05:01.918 00:05:01.918 00:05:01.918 Message: 00:05:01.918 ================= 00:05:01.918 Content Skipped 00:05:01.918 ================= 00:05:01.918 00:05:01.918 apps: 00:05:01.918 dumpcap: explicitly disabled via build config 00:05:01.918 graph: explicitly disabled via build config 00:05:01.918 pdump: explicitly disabled via build config 00:05:01.918 proc-info: explicitly disabled via build config 00:05:01.918 test-acl: explicitly disabled via build config 00:05:01.918 test-bbdev: explicitly disabled via build config 00:05:01.918 test-cmdline: explicitly disabled via build config 00:05:01.918 test-compress-perf: explicitly disabled via build config 00:05:01.918 test-crypto-perf: explicitly disabled via build config 00:05:01.918 test-dma-perf: explicitly disabled via build config 00:05:01.918 test-eventdev: explicitly disabled via build config 00:05:01.918 test-fib: explicitly disabled via build config 00:05:01.918 test-flow-perf: explicitly disabled via build config 00:05:01.918 test-gpudev: explicitly disabled via build config 00:05:01.918 test-mldev: explicitly disabled via build 
config 00:05:01.918 test-pipeline: explicitly disabled via build config 00:05:01.918 test-pmd: explicitly disabled via build config 00:05:01.919 test-regex: explicitly disabled via build config 00:05:01.919 test-sad: explicitly disabled via build config 00:05:01.919 test-security-perf: explicitly disabled via build config 00:05:01.919 00:05:01.919 libs: 00:05:01.919 metrics: explicitly disabled via build config 00:05:01.919 acl: explicitly disabled via build config 00:05:01.919 bbdev: explicitly disabled via build config 00:05:01.919 bitratestats: explicitly disabled via build config 00:05:01.919 bpf: explicitly disabled via build config 00:05:01.919 cfgfile: explicitly disabled via build config 00:05:01.919 distributor: explicitly disabled via build config 00:05:01.919 efd: explicitly disabled via build config 00:05:01.919 eventdev: explicitly disabled via build config 00:05:01.919 dispatcher: explicitly disabled via build config 00:05:01.919 gpudev: explicitly disabled via build config 00:05:01.919 gro: explicitly disabled via build config 00:05:01.919 gso: explicitly disabled via build config 00:05:01.919 ip_frag: explicitly disabled via build config 00:05:01.919 jobstats: explicitly disabled via build config 00:05:01.919 latencystats: explicitly disabled via build config 00:05:01.919 lpm: explicitly disabled via build config 00:05:01.919 member: explicitly disabled via build config 00:05:01.919 pcapng: explicitly disabled via build config 00:05:01.919 rawdev: explicitly disabled via build config 00:05:01.919 regexdev: explicitly disabled via build config 00:05:01.919 mldev: explicitly disabled via build config 00:05:01.919 rib: explicitly disabled via build config 00:05:01.919 sched: explicitly disabled via build config 00:05:01.919 stack: explicitly disabled via build config 00:05:01.919 ipsec: explicitly disabled via build config 00:05:01.919 pdcp: explicitly disabled via build config 00:05:01.919 fib: explicitly disabled via build config 00:05:01.919 port: explicitly disabled via build config 00:05:01.919 pdump: explicitly disabled via build config 00:05:01.919 table: explicitly disabled via build config 00:05:01.919 pipeline: explicitly disabled via build config 00:05:01.919 graph: explicitly disabled via build config 00:05:01.919 node: explicitly disabled via build config 00:05:01.919 00:05:01.919 drivers: 00:05:01.919 common/cpt: not in enabled drivers build config 00:05:01.919 common/dpaax: not in enabled drivers build config 00:05:01.919 common/iavf: not in enabled drivers build config 00:05:01.919 common/idpf: not in enabled drivers build config 00:05:01.919 common/mvep: not in enabled drivers build config 00:05:01.919 common/octeontx: not in enabled drivers build config 00:05:01.919 bus/auxiliary: not in enabled drivers build config 00:05:01.919 bus/cdx: not in enabled drivers build config 00:05:01.919 bus/dpaa: not in enabled drivers build config 00:05:01.919 bus/fslmc: not in enabled drivers build config 00:05:01.919 bus/ifpga: not in enabled drivers build config 00:05:01.919 bus/platform: not in enabled drivers build config 00:05:01.919 bus/vmbus: not in enabled drivers build config 00:05:01.919 common/cnxk: not in enabled drivers build config 00:05:01.919 common/mlx5: not in enabled drivers build config 00:05:01.919 common/nfp: not in enabled drivers build config 00:05:01.919 common/qat: not in enabled drivers build config 00:05:01.919 common/sfc_efx: not in enabled drivers build config 00:05:01.919 mempool/bucket: not in enabled drivers build config 00:05:01.919 
mempool/cnxk: not in enabled drivers build config 00:05:01.919 mempool/dpaa: not in enabled drivers build config 00:05:01.919 mempool/dpaa2: not in enabled drivers build config 00:05:01.919 mempool/octeontx: not in enabled drivers build config 00:05:01.919 mempool/stack: not in enabled drivers build config 00:05:01.919 dma/cnxk: not in enabled drivers build config 00:05:01.919 dma/dpaa: not in enabled drivers build config 00:05:01.919 dma/dpaa2: not in enabled drivers build config 00:05:01.919 dma/hisilicon: not in enabled drivers build config 00:05:01.919 dma/idxd: not in enabled drivers build config 00:05:01.919 dma/ioat: not in enabled drivers build config 00:05:01.919 dma/skeleton: not in enabled drivers build config 00:05:01.919 net/af_packet: not in enabled drivers build config 00:05:01.919 net/af_xdp: not in enabled drivers build config 00:05:01.919 net/ark: not in enabled drivers build config 00:05:01.919 net/atlantic: not in enabled drivers build config 00:05:01.919 net/avp: not in enabled drivers build config 00:05:01.919 net/axgbe: not in enabled drivers build config 00:05:01.919 net/bnx2x: not in enabled drivers build config 00:05:01.919 net/bnxt: not in enabled drivers build config 00:05:01.919 net/bonding: not in enabled drivers build config 00:05:01.919 net/cnxk: not in enabled drivers build config 00:05:01.919 net/cpfl: not in enabled drivers build config 00:05:01.919 net/cxgbe: not in enabled drivers build config 00:05:01.919 net/dpaa: not in enabled drivers build config 00:05:01.919 net/dpaa2: not in enabled drivers build config 00:05:01.919 net/e1000: not in enabled drivers build config 00:05:01.919 net/ena: not in enabled drivers build config 00:05:01.919 net/enetc: not in enabled drivers build config 00:05:01.919 net/enetfec: not in enabled drivers build config 00:05:01.919 net/enic: not in enabled drivers build config 00:05:01.919 net/failsafe: not in enabled drivers build config 00:05:01.919 net/fm10k: not in enabled drivers build config 00:05:01.919 net/gve: not in enabled drivers build config 00:05:01.919 net/hinic: not in enabled drivers build config 00:05:01.919 net/hns3: not in enabled drivers build config 00:05:01.919 net/i40e: not in enabled drivers build config 00:05:01.919 net/iavf: not in enabled drivers build config 00:05:01.919 net/ice: not in enabled drivers build config 00:05:01.919 net/idpf: not in enabled drivers build config 00:05:01.919 net/igc: not in enabled drivers build config 00:05:01.919 net/ionic: not in enabled drivers build config 00:05:01.919 net/ipn3ke: not in enabled drivers build config 00:05:01.919 net/ixgbe: not in enabled drivers build config 00:05:01.919 net/mana: not in enabled drivers build config 00:05:01.919 net/memif: not in enabled drivers build config 00:05:01.919 net/mlx4: not in enabled drivers build config 00:05:01.919 net/mlx5: not in enabled drivers build config 00:05:01.919 net/mvneta: not in enabled drivers build config 00:05:01.919 net/mvpp2: not in enabled drivers build config 00:05:01.919 net/netvsc: not in enabled drivers build config 00:05:01.919 net/nfb: not in enabled drivers build config 00:05:01.919 net/nfp: not in enabled drivers build config 00:05:01.919 net/ngbe: not in enabled drivers build config 00:05:01.919 net/null: not in enabled drivers build config 00:05:01.919 net/octeontx: not in enabled drivers build config 00:05:01.919 net/octeon_ep: not in enabled drivers build config 00:05:01.919 net/pcap: not in enabled drivers build config 00:05:01.919 net/pfe: not in enabled drivers build config 
00:05:01.919 net/qede: not in enabled drivers build config 00:05:01.919 net/ring: not in enabled drivers build config 00:05:01.919 net/sfc: not in enabled drivers build config 00:05:01.919 net/softnic: not in enabled drivers build config 00:05:01.919 net/tap: not in enabled drivers build config 00:05:01.919 net/thunderx: not in enabled drivers build config 00:05:01.919 net/txgbe: not in enabled drivers build config 00:05:01.919 net/vdev_netvsc: not in enabled drivers build config 00:05:01.919 net/vhost: not in enabled drivers build config 00:05:01.919 net/virtio: not in enabled drivers build config 00:05:01.919 net/vmxnet3: not in enabled drivers build config 00:05:01.919 raw/*: missing internal dependency, "rawdev" 00:05:01.919 crypto/armv8: not in enabled drivers build config 00:05:01.919 crypto/bcmfs: not in enabled drivers build config 00:05:01.919 crypto/caam_jr: not in enabled drivers build config 00:05:01.919 crypto/ccp: not in enabled drivers build config 00:05:01.919 crypto/cnxk: not in enabled drivers build config 00:05:01.919 crypto/dpaa_sec: not in enabled drivers build config 00:05:01.919 crypto/dpaa2_sec: not in enabled drivers build config 00:05:01.919 crypto/ipsec_mb: not in enabled drivers build config 00:05:01.919 crypto/mlx5: not in enabled drivers build config 00:05:01.919 crypto/mvsam: not in enabled drivers build config 00:05:01.919 crypto/nitrox: not in enabled drivers build config 00:05:01.919 crypto/null: not in enabled drivers build config 00:05:01.919 crypto/octeontx: not in enabled drivers build config 00:05:01.919 crypto/openssl: not in enabled drivers build config 00:05:01.919 crypto/scheduler: not in enabled drivers build config 00:05:01.919 crypto/uadk: not in enabled drivers build config 00:05:01.919 crypto/virtio: not in enabled drivers build config 00:05:01.919 compress/isal: not in enabled drivers build config 00:05:01.919 compress/mlx5: not in enabled drivers build config 00:05:01.919 compress/octeontx: not in enabled drivers build config 00:05:01.919 compress/zlib: not in enabled drivers build config 00:05:01.919 regex/*: missing internal dependency, "regexdev" 00:05:01.919 ml/*: missing internal dependency, "mldev" 00:05:01.919 vdpa/ifc: not in enabled drivers build config 00:05:01.919 vdpa/mlx5: not in enabled drivers build config 00:05:01.919 vdpa/nfp: not in enabled drivers build config 00:05:01.919 vdpa/sfc: not in enabled drivers build config 00:05:01.919 event/*: missing internal dependency, "eventdev" 00:05:01.919 baseband/*: missing internal dependency, "bbdev" 00:05:01.919 gpu/*: missing internal dependency, "gpudev" 00:05:01.919 00:05:01.919 00:05:01.919 Build targets in project: 85 00:05:01.919 00:05:01.919 DPDK 23.11.0 00:05:01.919 00:05:01.919 User defined options 00:05:01.919 buildtype : debug 00:05:01.919 default_library : shared 00:05:01.919 libdir : lib 00:05:01.919 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:01.919 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:01.919 c_link_args : 00:05:01.919 cpu_instruction_set: native 00:05:01.919 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:05:01.919 disable_libs : 
pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:05:01.919 enable_docs : false 00:05:01.919 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:01.919 enable_kmods : false 00:05:01.919 tests : false 00:05:01.919 00:05:01.919 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:02.189 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:05:02.189 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:02.189 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:02.453 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:02.453 [4/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:02.453 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:02.453 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:02.453 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:02.453 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:02.453 [9/265] Linking static target lib/librte_kvargs.a 00:05:02.453 [10/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:02.453 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:02.453 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:02.453 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:02.453 [14/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:02.453 [15/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:02.453 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:02.453 [17/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:02.453 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:02.453 [19/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:02.453 [20/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:02.453 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:02.453 [22/265] Linking static target lib/librte_log.a 00:05:02.453 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:02.453 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:02.453 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:02.453 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:02.453 [27/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:02.453 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:02.453 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:02.453 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:02.453 [31/265] Linking static target lib/librte_pci.a 00:05:02.453 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:02.453 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:02.718 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:02.718 [35/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:02.718 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:02.718 [37/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:02.718 [38/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:02.718 [39/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:02.718 [40/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:02.718 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:02.718 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:02.983 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:02.983 [44/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:02.983 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:02.983 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:02.983 [47/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:02.983 [48/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:02.983 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:02.983 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:02.983 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:02.983 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:02.983 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:02.983 [54/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:02.983 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:02.983 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:02.983 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:02.983 [58/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:02.983 [59/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:02.983 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:02.983 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:02.983 [62/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:02.983 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:02.983 [64/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:02.983 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:02.983 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:02.983 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:02.983 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:02.983 [69/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:02.983 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:02.983 [71/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.983 [72/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:02.983 [73/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:02.983 [74/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 
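[Editor's note] This compile run is short (265 steps across the project's 85 build targets) because the configuration above strips DPDK down to what SPDK consumes: whole app and library groups are disabled and only the pci/vdev buses plus the ring mempool driver are built. A hand-run configure with the same shape would look roughly like the sketch below; the full disable_apps/disable_libs lists are the ones printed in the "User defined options" summary above and are elided here:

#!/usr/bin/env bash
set -e
DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk

meson setup "$DPDK/build-tmp" "$DPDK" \
    -Dbuildtype=debug \
    -Ddefault_library=shared \
    -Dprefix="$DPDK/build" \
    -Dtests=false -Denable_docs=false -Denable_kmods=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror'
    # plus the long -Ddisable_apps=... and -Ddisable_libs=... lists from the summary

ninja -C "$DPDK/build-tmp"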
00:05:02.983 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:02.983 [76/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:02.983 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:02.983 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:02.983 [79/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:02.983 [80/265] Linking static target lib/librte_meter.a 00:05:02.983 [81/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:02.983 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:02.983 [83/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:02.983 [84/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:02.983 [85/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:02.983 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:02.983 [87/265] Linking static target lib/librte_ring.a 00:05:02.983 [88/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:02.983 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:02.983 [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:02.983 [91/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:02.983 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:02.983 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:02.983 [94/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.983 [95/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:02.983 [96/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:02.983 [97/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:02.983 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:02.983 [99/265] Linking static target lib/librte_telemetry.a 00:05:02.983 [100/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:02.983 [101/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:02.983 [102/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:02.983 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:02.983 [104/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:02.983 [105/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:02.983 [106/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:02.983 [107/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:02.983 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:02.983 [109/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:02.983 [110/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:02.983 [111/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:02.983 [112/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:02.983 [113/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:02.983 [114/265] Linking static target lib/librte_timer.a 00:05:02.983 [115/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:02.983 [116/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:02.983 [117/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:02.983 [118/265] Linking static target lib/librte_cmdline.a 00:05:02.983 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:02.983 [120/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:02.983 [121/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:02.983 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:02.983 [123/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:02.983 [124/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:02.983 [125/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:02.983 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:02.984 [127/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:02.984 [128/265] Linking static target lib/librte_rcu.a 00:05:02.984 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:02.984 [130/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:02.984 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:02.984 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:02.984 [133/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:02.984 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:02.984 [135/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:02.984 [136/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:02.984 [137/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:02.984 [138/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:02.984 [139/265] Linking static target lib/librte_compressdev.a 00:05:02.984 [140/265] Linking static target lib/librte_mempool.a 00:05:03.242 [141/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:03.242 [142/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:03.242 [143/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:03.242 [144/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:03.242 [145/265] Linking static target lib/librte_net.a 00:05:03.242 [146/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:03.242 [147/265] Linking static target lib/librte_dmadev.a 00:05:03.242 [148/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:03.242 [149/265] Linking static target lib/librte_eal.a 00:05:03.242 [150/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:03.242 [151/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:03.242 [152/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:03.242 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:03.242 [154/265] Linking static target lib/librte_reorder.a 00:05:03.242 [155/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:03.242 [156/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:03.242 [157/265] 
Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.242 [158/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:03.242 [159/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:03.242 [160/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.242 [161/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:03.242 [162/265] Linking static target lib/librte_power.a 00:05:03.242 [163/265] Linking static target lib/librte_mbuf.a 00:05:03.242 [164/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:03.243 [165/265] Linking target lib/librte_log.so.24.0 00:05:03.243 [166/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:03.243 [167/265] Linking static target lib/librte_security.a 00:05:03.243 [168/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:03.243 [169/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:03.243 [170/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:03.243 [171/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:03.243 [172/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:03.243 [173/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:03.243 [174/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:03.243 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:03.243 [176/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.502 [177/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:03.502 [178/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:03.502 [179/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:05:03.502 [180/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:03.502 [181/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:03.502 [182/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:03.502 [183/265] Linking target lib/librte_kvargs.so.24.0 00:05:03.502 [184/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.502 [185/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:03.502 [186/265] Linking static target lib/librte_hash.a 00:05:03.502 [187/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:03.502 [188/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:03.502 [189/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:03.502 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:03.502 [191/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.502 [192/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.502 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:03.502 [194/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:03.502 [195/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:03.502 [196/265] Linking static target drivers/librte_bus_vdev.a 00:05:03.502 [197/265] Compiling C 
object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:03.502 [198/265] Linking static target lib/librte_cryptodev.a 00:05:03.502 [199/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.502 [200/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:05:03.502 [201/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:03.502 [202/265] Linking target lib/librte_telemetry.so.24.0 00:05:03.761 [203/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.761 [204/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:03.761 [205/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:03.761 [206/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:03.761 [207/265] Linking static target drivers/librte_bus_pci.a 00:05:03.761 [208/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.761 [209/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:03.761 [210/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:03.761 [211/265] Linking static target drivers/librte_mempool_ring.a 00:05:03.761 [212/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:05:03.761 [213/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.761 [214/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.020 [215/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.020 [216/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:04.020 [217/265] Linking static target lib/librte_ethdev.a 00:05:04.020 [218/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:04.020 [219/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.021 [220/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.280 [221/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.280 [222/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.280 [223/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.540 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.110 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:05.110 [226/265] Linking static target lib/librte_vhost.a 00:05:05.712 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.612 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.170 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.543 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.800 [231/265] Linking target lib/librte_eal.so.24.0 00:05:15.800 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 
00:05:15.800 [233/265] Linking target lib/librte_meter.so.24.0 00:05:15.800 [234/265] Linking target lib/librte_timer.so.24.0 00:05:15.800 [235/265] Linking target lib/librte_ring.so.24.0 00:05:15.800 [236/265] Linking target lib/librte_pci.so.24.0 00:05:15.800 [237/265] Linking target lib/librte_dmadev.so.24.0 00:05:15.800 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:05:16.058 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:05:16.058 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:05:16.058 [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:05:16.058 [242/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:05:16.058 [243/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:05:16.058 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:05:16.058 [245/265] Linking target lib/librte_rcu.so.24.0 00:05:16.058 [246/265] Linking target lib/librte_mempool.so.24.0 00:05:16.317 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:05:16.317 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:05:16.317 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:05:16.317 [250/265] Linking target lib/librte_mbuf.so.24.0 00:05:16.317 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:05:16.576 [252/265] Linking target lib/librte_reorder.so.24.0 00:05:16.576 [253/265] Linking target lib/librte_cryptodev.so.24.0 00:05:16.576 [254/265] Linking target lib/librte_compressdev.so.24.0 00:05:16.576 [255/265] Linking target lib/librte_net.so.24.0 00:05:16.576 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:05:16.576 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:05:16.576 [258/265] Linking target lib/librte_cmdline.so.24.0 00:05:16.576 [259/265] Linking target lib/librte_security.so.24.0 00:05:16.576 [260/265] Linking target lib/librte_hash.so.24.0 00:05:16.835 [261/265] Linking target lib/librte_ethdev.so.24.0 00:05:16.835 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:05:16.835 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:05:16.835 [264/265] Linking target lib/librte_power.so.24.0 00:05:16.835 [265/265] Linking target lib/librte_vhost.so.24.0 00:05:16.835 INFO: autodetecting backend as ninja 00:05:16.835 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:05:18.212 CC lib/log/log.o 00:05:18.212 CC lib/log/log_flags.o 00:05:18.212 CC lib/log/log_deprecated.o 00:05:18.212 CC lib/ut/ut.o 00:05:18.212 CC lib/ut_mock/mock.o 00:05:18.212 LIB libspdk_ut_mock.a 00:05:18.212 LIB libspdk_log.a 00:05:18.212 LIB libspdk_ut.a 00:05:18.212 SO libspdk_log.so.7.0 00:05:18.212 SO libspdk_ut_mock.so.6.0 00:05:18.212 SO libspdk_ut.so.2.0 00:05:18.212 SYMLINK libspdk_log.so 00:05:18.212 SYMLINK libspdk_ut_mock.so 00:05:18.212 SYMLINK libspdk_ut.so 00:05:18.471 CC lib/util/base64.o 00:05:18.471 CC lib/util/crc16.o 00:05:18.471 CC lib/util/bit_array.o 00:05:18.471 CC lib/util/cpuset.o 00:05:18.471 CC lib/util/crc32.o 00:05:18.471 CC lib/util/dif.o 00:05:18.471 CC lib/util/crc32c.o 00:05:18.471 CC lib/util/crc32_ieee.o 00:05:18.471 
CC lib/util/crc64.o 00:05:18.471 CC lib/util/fd.o 00:05:18.471 CC lib/util/iov.o 00:05:18.471 CC lib/util/file.o 00:05:18.471 CC lib/util/hexlify.o 00:05:18.471 CC lib/util/string.o 00:05:18.471 CC lib/util/math.o 00:05:18.471 CC lib/util/pipe.o 00:05:18.471 CC lib/util/fd_group.o 00:05:18.471 CC lib/util/strerror_tls.o 00:05:18.471 CC lib/util/uuid.o 00:05:18.471 CC lib/util/xor.o 00:05:18.471 CC lib/util/zipf.o 00:05:18.471 CXX lib/trace_parser/trace.o 00:05:18.471 CC lib/dma/dma.o 00:05:18.471 CC lib/ioat/ioat.o 00:05:18.736 CC lib/vfio_user/host/vfio_user_pci.o 00:05:18.736 CC lib/vfio_user/host/vfio_user.o 00:05:18.736 LIB libspdk_dma.a 00:05:18.736 SO libspdk_dma.so.4.0 00:05:18.736 LIB libspdk_ioat.a 00:05:19.017 SYMLINK libspdk_dma.so 00:05:19.017 SO libspdk_ioat.so.7.0 00:05:19.017 LIB libspdk_vfio_user.a 00:05:19.017 SYMLINK libspdk_ioat.so 00:05:19.017 LIB libspdk_util.a 00:05:19.017 SO libspdk_vfio_user.so.5.0 00:05:19.017 SO libspdk_util.so.9.0 00:05:19.017 SYMLINK libspdk_vfio_user.so 00:05:19.308 SYMLINK libspdk_util.so 00:05:19.308 LIB libspdk_trace_parser.a 00:05:19.308 SO libspdk_trace_parser.so.5.0 00:05:19.308 SYMLINK libspdk_trace_parser.so 00:05:19.565 CC lib/conf/conf.o 00:05:19.565 CC lib/json/json_parse.o 00:05:19.565 CC lib/env_dpdk/env.o 00:05:19.565 CC lib/env_dpdk/memory.o 00:05:19.565 CC lib/json/json_util.o 00:05:19.565 CC lib/json/json_write.o 00:05:19.565 CC lib/env_dpdk/pci.o 00:05:19.565 CC lib/env_dpdk/init.o 00:05:19.565 CC lib/env_dpdk/threads.o 00:05:19.565 CC lib/env_dpdk/pci_ioat.o 00:05:19.565 CC lib/env_dpdk/pci_virtio.o 00:05:19.565 CC lib/env_dpdk/pci_event.o 00:05:19.565 CC lib/env_dpdk/pci_vmd.o 00:05:19.565 CC lib/env_dpdk/pci_idxd.o 00:05:19.565 CC lib/env_dpdk/sigbus_handler.o 00:05:19.565 CC lib/env_dpdk/pci_dpdk.o 00:05:19.565 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:19.565 CC lib/vmd/vmd.o 00:05:19.565 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:19.565 CC lib/rdma/common.o 00:05:19.565 CC lib/vmd/led.o 00:05:19.565 CC lib/rdma/rdma_verbs.o 00:05:19.565 CC lib/idxd/idxd.o 00:05:19.565 CC lib/idxd/idxd_user.o 00:05:19.823 LIB libspdk_conf.a 00:05:19.823 SO libspdk_conf.so.6.0 00:05:19.823 LIB libspdk_rdma.a 00:05:19.823 LIB libspdk_json.a 00:05:19.823 SO libspdk_rdma.so.6.0 00:05:19.823 SYMLINK libspdk_conf.so 00:05:19.823 SO libspdk_json.so.6.0 00:05:19.823 SYMLINK libspdk_rdma.so 00:05:19.823 SYMLINK libspdk_json.so 00:05:20.082 LIB libspdk_idxd.a 00:05:20.082 SO libspdk_idxd.so.12.0 00:05:20.082 LIB libspdk_vmd.a 00:05:20.082 SYMLINK libspdk_idxd.so 00:05:20.082 SO libspdk_vmd.so.6.0 00:05:20.082 SYMLINK libspdk_vmd.so 00:05:20.341 CC lib/jsonrpc/jsonrpc_server.o 00:05:20.341 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:20.341 CC lib/jsonrpc/jsonrpc_client.o 00:05:20.341 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:20.599 LIB libspdk_jsonrpc.a 00:05:20.599 LIB libspdk_env_dpdk.a 00:05:20.599 SO libspdk_jsonrpc.so.6.0 00:05:20.599 SO libspdk_env_dpdk.so.14.0 00:05:20.599 SYMLINK libspdk_jsonrpc.so 00:05:20.599 SYMLINK libspdk_env_dpdk.so 00:05:20.858 CC lib/rpc/rpc.o 00:05:21.117 LIB libspdk_rpc.a 00:05:21.117 SO libspdk_rpc.so.6.0 00:05:21.117 SYMLINK libspdk_rpc.so 00:05:21.684 CC lib/trace/trace.o 00:05:21.684 CC lib/trace/trace_flags.o 00:05:21.684 CC lib/trace/trace_rpc.o 00:05:21.684 CC lib/keyring/keyring.o 00:05:21.684 CC lib/keyring/keyring_rpc.o 00:05:21.684 CC lib/notify/notify.o 00:05:21.684 CC lib/notify/notify_rpc.o 00:05:21.684 LIB libspdk_trace.a 00:05:21.684 LIB libspdk_notify.a 00:05:21.684 LIB libspdk_keyring.a 00:05:21.684 SO 
libspdk_trace.so.10.0 00:05:21.684 SO libspdk_notify.so.6.0 00:05:21.684 SO libspdk_keyring.so.1.0 00:05:21.942 SYMLINK libspdk_trace.so 00:05:21.942 SYMLINK libspdk_notify.so 00:05:21.942 SYMLINK libspdk_keyring.so 00:05:22.201 CC lib/thread/thread.o 00:05:22.201 CC lib/sock/sock.o 00:05:22.201 CC lib/thread/iobuf.o 00:05:22.201 CC lib/sock/sock_rpc.o 00:05:22.460 LIB libspdk_sock.a 00:05:22.460 SO libspdk_sock.so.9.0 00:05:22.719 SYMLINK libspdk_sock.so 00:05:22.997 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:22.997 CC lib/nvme/nvme_ns_cmd.o 00:05:22.997 CC lib/nvme/nvme_ctrlr.o 00:05:22.997 CC lib/nvme/nvme_fabric.o 00:05:22.997 CC lib/nvme/nvme_ns.o 00:05:22.997 CC lib/nvme/nvme_pcie_common.o 00:05:22.997 CC lib/nvme/nvme_pcie.o 00:05:22.997 CC lib/nvme/nvme_qpair.o 00:05:22.997 CC lib/nvme/nvme.o 00:05:22.997 CC lib/nvme/nvme_quirks.o 00:05:22.998 CC lib/nvme/nvme_transport.o 00:05:22.998 CC lib/nvme/nvme_discovery.o 00:05:22.998 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:22.998 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:22.998 CC lib/nvme/nvme_tcp.o 00:05:22.998 CC lib/nvme/nvme_opal.o 00:05:22.998 CC lib/nvme/nvme_io_msg.o 00:05:22.998 CC lib/nvme/nvme_poll_group.o 00:05:22.998 CC lib/nvme/nvme_zns.o 00:05:22.998 CC lib/nvme/nvme_stubs.o 00:05:22.998 CC lib/nvme/nvme_auth.o 00:05:22.998 CC lib/nvme/nvme_cuse.o 00:05:22.998 CC lib/nvme/nvme_vfio_user.o 00:05:22.998 CC lib/nvme/nvme_rdma.o 00:05:23.263 LIB libspdk_thread.a 00:05:23.263 SO libspdk_thread.so.10.0 00:05:23.263 SYMLINK libspdk_thread.so 00:05:23.831 CC lib/accel/accel.o 00:05:23.831 CC lib/accel/accel_rpc.o 00:05:23.831 CC lib/accel/accel_sw.o 00:05:23.831 CC lib/init/json_config.o 00:05:23.831 CC lib/init/subsystem.o 00:05:23.831 CC lib/init/subsystem_rpc.o 00:05:23.831 CC lib/init/rpc.o 00:05:23.831 CC lib/vfu_tgt/tgt_endpoint.o 00:05:23.831 CC lib/vfu_tgt/tgt_rpc.o 00:05:23.831 CC lib/blob/request.o 00:05:23.831 CC lib/blob/blobstore.o 00:05:23.831 CC lib/virtio/virtio.o 00:05:23.831 CC lib/virtio/virtio_vfio_user.o 00:05:23.831 CC lib/blob/zeroes.o 00:05:23.831 CC lib/virtio/virtio_vhost_user.o 00:05:23.831 CC lib/blob/blob_bs_dev.o 00:05:23.831 CC lib/virtio/virtio_pci.o 00:05:23.831 LIB libspdk_init.a 00:05:23.831 SO libspdk_init.so.5.0 00:05:23.831 LIB libspdk_vfu_tgt.a 00:05:24.090 LIB libspdk_virtio.a 00:05:24.090 SO libspdk_vfu_tgt.so.3.0 00:05:24.090 SYMLINK libspdk_init.so 00:05:24.090 SO libspdk_virtio.so.7.0 00:05:24.090 SYMLINK libspdk_vfu_tgt.so 00:05:24.090 SYMLINK libspdk_virtio.so 00:05:24.349 CC lib/event/app.o 00:05:24.349 CC lib/event/reactor.o 00:05:24.349 CC lib/event/log_rpc.o 00:05:24.350 CC lib/event/app_rpc.o 00:05:24.350 CC lib/event/scheduler_static.o 00:05:24.350 LIB libspdk_accel.a 00:05:24.350 SO libspdk_accel.so.15.0 00:05:24.609 SYMLINK libspdk_accel.so 00:05:24.609 LIB libspdk_nvme.a 00:05:24.609 LIB libspdk_event.a 00:05:24.609 SO libspdk_nvme.so.13.0 00:05:24.609 SO libspdk_event.so.13.0 00:05:24.868 SYMLINK libspdk_event.so 00:05:24.868 CC lib/bdev/bdev.o 00:05:24.868 CC lib/bdev/bdev_rpc.o 00:05:24.868 CC lib/bdev/bdev_zone.o 00:05:24.868 CC lib/bdev/part.o 00:05:24.868 CC lib/bdev/scsi_nvme.o 00:05:24.868 SYMLINK libspdk_nvme.so 00:05:25.806 LIB libspdk_blob.a 00:05:25.806 SO libspdk_blob.so.11.0 00:05:25.806 SYMLINK libspdk_blob.so 00:05:26.065 CC lib/lvol/lvol.o 00:05:26.065 CC lib/blobfs/blobfs.o 00:05:26.065 CC lib/blobfs/tree.o 00:05:26.633 LIB libspdk_bdev.a 00:05:26.633 SO libspdk_bdev.so.15.0 00:05:26.633 LIB libspdk_blobfs.a 00:05:26.891 LIB libspdk_lvol.a 00:05:26.891 SO 
libspdk_blobfs.so.10.0 00:05:26.891 SYMLINK libspdk_bdev.so 00:05:26.891 SO libspdk_lvol.so.10.0 00:05:26.891 SYMLINK libspdk_blobfs.so 00:05:26.891 SYMLINK libspdk_lvol.so 00:05:27.150 CC lib/scsi/lun.o 00:05:27.150 CC lib/scsi/dev.o 00:05:27.150 CC lib/scsi/port.o 00:05:27.150 CC lib/scsi/scsi.o 00:05:27.150 CC lib/scsi/scsi_bdev.o 00:05:27.150 CC lib/scsi/scsi_pr.o 00:05:27.150 CC lib/scsi/scsi_rpc.o 00:05:27.150 CC lib/scsi/task.o 00:05:27.150 CC lib/nvmf/ctrlr.o 00:05:27.150 CC lib/nvmf/ctrlr_bdev.o 00:05:27.150 CC lib/nvmf/ctrlr_discovery.o 00:05:27.150 CC lib/nvmf/subsystem.o 00:05:27.150 CC lib/ftl/ftl_core.o 00:05:27.150 CC lib/nvmf/nvmf.o 00:05:27.150 CC lib/nvmf/nvmf_rpc.o 00:05:27.150 CC lib/ftl/ftl_init.o 00:05:27.150 CC lib/ftl/ftl_layout.o 00:05:27.150 CC lib/nvmf/transport.o 00:05:27.150 CC lib/ftl/ftl_debug.o 00:05:27.150 CC lib/nvmf/tcp.o 00:05:27.150 CC lib/ftl/ftl_io.o 00:05:27.150 CC lib/nvmf/vfio_user.o 00:05:27.150 CC lib/ftl/ftl_sb.o 00:05:27.150 CC lib/nvmf/rdma.o 00:05:27.150 CC lib/ftl/ftl_l2p.o 00:05:27.150 CC lib/ftl/ftl_l2p_flat.o 00:05:27.150 CC lib/ftl/ftl_band_ops.o 00:05:27.150 CC lib/ftl/ftl_nv_cache.o 00:05:27.150 CC lib/ftl/ftl_band.o 00:05:27.150 CC lib/ftl/ftl_writer.o 00:05:27.150 CC lib/ftl/ftl_rq.o 00:05:27.150 CC lib/ftl/ftl_l2p_cache.o 00:05:27.150 CC lib/ftl/ftl_reloc.o 00:05:27.150 CC lib/ftl/mngt/ftl_mngt.o 00:05:27.150 CC lib/ublk/ublk.o 00:05:27.150 CC lib/ftl/ftl_p2l.o 00:05:27.150 CC lib/nbd/nbd.o 00:05:27.150 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:27.150 CC lib/ublk/ublk_rpc.o 00:05:27.150 CC lib/nbd/nbd_rpc.o 00:05:27.150 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:27.150 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:27.150 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:27.150 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:27.150 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:27.150 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:27.150 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:27.150 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:27.150 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:27.150 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:27.150 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:27.150 CC lib/ftl/utils/ftl_md.o 00:05:27.150 CC lib/ftl/utils/ftl_conf.o 00:05:27.150 CC lib/ftl/utils/ftl_mempool.o 00:05:27.150 CC lib/ftl/utils/ftl_property.o 00:05:27.150 CC lib/ftl/utils/ftl_bitmap.o 00:05:27.150 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:27.150 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:27.150 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:27.150 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:27.150 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:27.150 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:27.150 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:27.150 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:27.150 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:27.150 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:27.150 CC lib/ftl/base/ftl_base_dev.o 00:05:27.150 CC lib/ftl/base/ftl_base_bdev.o 00:05:27.150 CC lib/ftl/ftl_trace.o 00:05:27.719 LIB libspdk_nbd.a 00:05:27.719 SO libspdk_nbd.so.7.0 00:05:27.719 SYMLINK libspdk_nbd.so 00:05:27.719 LIB libspdk_scsi.a 00:05:27.719 LIB libspdk_ublk.a 00:05:27.719 SO libspdk_scsi.so.9.0 00:05:27.719 SO libspdk_ublk.so.3.0 00:05:27.977 SYMLINK libspdk_ublk.so 00:05:27.977 SYMLINK libspdk_scsi.so 00:05:27.977 LIB libspdk_ftl.a 00:05:27.977 SO libspdk_ftl.so.9.0 00:05:28.235 CC lib/iscsi/conn.o 00:05:28.235 CC lib/iscsi/iscsi.o 00:05:28.235 CC lib/iscsi/init_grp.o 00:05:28.235 CC lib/iscsi/param.o 00:05:28.235 CC lib/iscsi/portal_grp.o 00:05:28.235 CC lib/iscsi/md5.o 00:05:28.235 CC lib/iscsi/tgt_node.o 
00:05:28.235 CC lib/iscsi/iscsi_subsystem.o 00:05:28.235 CC lib/vhost/vhost.o 00:05:28.235 CC lib/iscsi/iscsi_rpc.o 00:05:28.235 CC lib/iscsi/task.o 00:05:28.235 CC lib/vhost/vhost_rpc.o 00:05:28.235 CC lib/vhost/vhost_scsi.o 00:05:28.235 CC lib/vhost/vhost_blk.o 00:05:28.235 CC lib/vhost/rte_vhost_user.o 00:05:28.494 SYMLINK libspdk_ftl.so 00:05:28.756 LIB libspdk_nvmf.a 00:05:28.756 SO libspdk_nvmf.so.18.0 00:05:29.036 SYMLINK libspdk_nvmf.so 00:05:29.036 LIB libspdk_vhost.a 00:05:29.036 SO libspdk_vhost.so.8.0 00:05:29.295 SYMLINK libspdk_vhost.so 00:05:29.295 LIB libspdk_iscsi.a 00:05:29.295 SO libspdk_iscsi.so.8.0 00:05:29.554 SYMLINK libspdk_iscsi.so 00:05:30.122 CC module/env_dpdk/env_dpdk_rpc.o 00:05:30.122 CC module/vfu_device/vfu_virtio.o 00:05:30.122 CC module/vfu_device/vfu_virtio_scsi.o 00:05:30.122 CC module/vfu_device/vfu_virtio_blk.o 00:05:30.122 CC module/vfu_device/vfu_virtio_rpc.o 00:05:30.122 LIB libspdk_env_dpdk_rpc.a 00:05:30.122 CC module/accel/iaa/accel_iaa.o 00:05:30.122 CC module/accel/iaa/accel_iaa_rpc.o 00:05:30.122 CC module/accel/ioat/accel_ioat.o 00:05:30.122 CC module/accel/ioat/accel_ioat_rpc.o 00:05:30.122 CC module/accel/dsa/accel_dsa.o 00:05:30.122 CC module/accel/error/accel_error.o 00:05:30.122 CC module/accel/dsa/accel_dsa_rpc.o 00:05:30.122 CC module/accel/error/accel_error_rpc.o 00:05:30.122 CC module/scheduler/gscheduler/gscheduler.o 00:05:30.122 CC module/blob/bdev/blob_bdev.o 00:05:30.122 CC module/sock/posix/posix.o 00:05:30.122 CC module/keyring/file/keyring.o 00:05:30.122 SO libspdk_env_dpdk_rpc.so.6.0 00:05:30.122 CC module/keyring/file/keyring_rpc.o 00:05:30.122 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:30.122 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:30.122 SYMLINK libspdk_env_dpdk_rpc.so 00:05:30.382 LIB libspdk_scheduler_gscheduler.a 00:05:30.382 LIB libspdk_accel_ioat.a 00:05:30.382 LIB libspdk_keyring_file.a 00:05:30.382 LIB libspdk_scheduler_dpdk_governor.a 00:05:30.382 LIB libspdk_accel_error.a 00:05:30.382 LIB libspdk_accel_iaa.a 00:05:30.382 LIB libspdk_scheduler_dynamic.a 00:05:30.382 SO libspdk_scheduler_gscheduler.so.4.0 00:05:30.382 LIB libspdk_accel_dsa.a 00:05:30.382 SO libspdk_accel_ioat.so.6.0 00:05:30.382 SO libspdk_accel_error.so.2.0 00:05:30.382 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:30.382 SO libspdk_keyring_file.so.1.0 00:05:30.382 SO libspdk_accel_iaa.so.3.0 00:05:30.382 SO libspdk_scheduler_dynamic.so.4.0 00:05:30.382 SO libspdk_accel_dsa.so.5.0 00:05:30.382 LIB libspdk_blob_bdev.a 00:05:30.382 SYMLINK libspdk_scheduler_gscheduler.so 00:05:30.382 SYMLINK libspdk_accel_ioat.so 00:05:30.382 SYMLINK libspdk_keyring_file.so 00:05:30.382 SYMLINK libspdk_accel_error.so 00:05:30.382 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:30.382 SYMLINK libspdk_accel_iaa.so 00:05:30.382 SYMLINK libspdk_scheduler_dynamic.so 00:05:30.382 SO libspdk_blob_bdev.so.11.0 00:05:30.382 SYMLINK libspdk_accel_dsa.so 00:05:30.382 LIB libspdk_vfu_device.a 00:05:30.642 SYMLINK libspdk_blob_bdev.so 00:05:30.642 SO libspdk_vfu_device.so.3.0 00:05:30.642 SYMLINK libspdk_vfu_device.so 00:05:30.901 LIB libspdk_sock_posix.a 00:05:30.901 SO libspdk_sock_posix.so.6.0 00:05:30.901 SYMLINK libspdk_sock_posix.so 00:05:30.901 CC module/bdev/aio/bdev_aio.o 00:05:30.901 CC module/bdev/aio/bdev_aio_rpc.o 00:05:30.901 CC module/bdev/delay/vbdev_delay.o 00:05:30.901 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:30.901 CC module/blobfs/bdev/blobfs_bdev.o 00:05:30.901 CC module/bdev/raid/bdev_raid.o 00:05:30.901 CC 
module/bdev/error/vbdev_error.o 00:05:30.901 CC module/bdev/raid/bdev_raid_sb.o 00:05:30.901 CC module/bdev/raid/bdev_raid_rpc.o 00:05:30.901 CC module/bdev/error/vbdev_error_rpc.o 00:05:30.901 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:30.901 CC module/bdev/raid/raid0.o 00:05:30.901 CC module/bdev/raid/raid1.o 00:05:30.902 CC module/bdev/nvme/bdev_nvme.o 00:05:30.902 CC module/bdev/nvme/nvme_rpc.o 00:05:30.902 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:30.902 CC module/bdev/raid/concat.o 00:05:30.902 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:30.902 CC module/bdev/nvme/vbdev_opal.o 00:05:30.902 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:30.902 CC module/bdev/nvme/bdev_mdns_client.o 00:05:30.902 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:30.902 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:30.902 CC module/bdev/passthru/vbdev_passthru.o 00:05:30.902 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:30.902 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:30.902 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:30.902 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:30.902 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:30.902 CC module/bdev/malloc/bdev_malloc.o 00:05:30.902 CC module/bdev/split/vbdev_split.o 00:05:30.902 CC module/bdev/lvol/vbdev_lvol.o 00:05:30.902 CC module/bdev/split/vbdev_split_rpc.o 00:05:30.902 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:31.160 CC module/bdev/gpt/gpt.o 00:05:31.160 CC module/bdev/iscsi/bdev_iscsi.o 00:05:31.160 CC module/bdev/gpt/vbdev_gpt.o 00:05:31.160 CC module/bdev/null/bdev_null_rpc.o 00:05:31.160 CC module/bdev/null/bdev_null.o 00:05:31.160 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:31.160 CC module/bdev/ftl/bdev_ftl.o 00:05:31.160 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:31.160 LIB libspdk_blobfs_bdev.a 00:05:31.160 SO libspdk_blobfs_bdev.so.6.0 00:05:31.418 LIB libspdk_bdev_error.a 00:05:31.418 LIB libspdk_bdev_split.a 00:05:31.418 SO libspdk_bdev_error.so.6.0 00:05:31.418 LIB libspdk_bdev_gpt.a 00:05:31.418 LIB libspdk_bdev_null.a 00:05:31.418 SYMLINK libspdk_blobfs_bdev.so 00:05:31.418 LIB libspdk_bdev_aio.a 00:05:31.418 SO libspdk_bdev_split.so.6.0 00:05:31.418 LIB libspdk_bdev_ftl.a 00:05:31.418 LIB libspdk_bdev_passthru.a 00:05:31.418 LIB libspdk_bdev_zone_block.a 00:05:31.418 LIB libspdk_bdev_delay.a 00:05:31.418 SO libspdk_bdev_null.so.6.0 00:05:31.418 SO libspdk_bdev_gpt.so.6.0 00:05:31.418 SO libspdk_bdev_aio.so.6.0 00:05:31.418 SYMLINK libspdk_bdev_error.so 00:05:31.418 SO libspdk_bdev_ftl.so.6.0 00:05:31.418 SYMLINK libspdk_bdev_split.so 00:05:31.418 LIB libspdk_bdev_malloc.a 00:05:31.418 SO libspdk_bdev_zone_block.so.6.0 00:05:31.418 LIB libspdk_bdev_iscsi.a 00:05:31.418 SO libspdk_bdev_passthru.so.6.0 00:05:31.418 SO libspdk_bdev_delay.so.6.0 00:05:31.418 SYMLINK libspdk_bdev_gpt.so 00:05:31.418 SYMLINK libspdk_bdev_null.so 00:05:31.418 SO libspdk_bdev_malloc.so.6.0 00:05:31.418 SO libspdk_bdev_iscsi.so.6.0 00:05:31.418 SYMLINK libspdk_bdev_aio.so 00:05:31.418 SYMLINK libspdk_bdev_zone_block.so 00:05:31.418 SYMLINK libspdk_bdev_ftl.so 00:05:31.418 SYMLINK libspdk_bdev_passthru.so 00:05:31.418 LIB libspdk_bdev_lvol.a 00:05:31.418 SYMLINK libspdk_bdev_delay.so 00:05:31.418 SYMLINK libspdk_bdev_iscsi.so 00:05:31.418 SYMLINK libspdk_bdev_malloc.so 00:05:31.418 LIB libspdk_bdev_virtio.a 00:05:31.418 SO libspdk_bdev_lvol.so.6.0 00:05:31.677 SO libspdk_bdev_virtio.so.6.0 00:05:31.677 SYMLINK libspdk_bdev_lvol.so 00:05:31.677 SYMLINK libspdk_bdev_virtio.so 00:05:31.677 LIB libspdk_bdev_raid.a 00:05:31.937 SO 
libspdk_bdev_raid.so.6.0 00:05:31.937 SYMLINK libspdk_bdev_raid.so 00:05:32.506 LIB libspdk_bdev_nvme.a 00:05:32.766 SO libspdk_bdev_nvme.so.7.0 00:05:32.766 SYMLINK libspdk_bdev_nvme.so 00:05:33.706 CC module/event/subsystems/scheduler/scheduler.o 00:05:33.706 CC module/event/subsystems/keyring/keyring.o 00:05:33.706 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:33.706 CC module/event/subsystems/vmd/vmd.o 00:05:33.706 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:33.706 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:33.706 CC module/event/subsystems/sock/sock.o 00:05:33.706 CC module/event/subsystems/iobuf/iobuf.o 00:05:33.706 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:33.706 LIB libspdk_event_sock.a 00:05:33.706 LIB libspdk_event_keyring.a 00:05:33.706 LIB libspdk_event_scheduler.a 00:05:33.706 LIB libspdk_event_vhost_blk.a 00:05:33.706 LIB libspdk_event_vmd.a 00:05:33.706 LIB libspdk_event_vfu_tgt.a 00:05:33.706 SO libspdk_event_scheduler.so.4.0 00:05:33.706 SO libspdk_event_sock.so.5.0 00:05:33.706 SO libspdk_event_keyring.so.1.0 00:05:33.706 LIB libspdk_event_iobuf.a 00:05:33.706 SO libspdk_event_vmd.so.6.0 00:05:33.706 SO libspdk_event_vhost_blk.so.3.0 00:05:33.706 SO libspdk_event_vfu_tgt.so.3.0 00:05:33.706 SYMLINK libspdk_event_scheduler.so 00:05:33.706 SO libspdk_event_iobuf.so.3.0 00:05:33.706 SYMLINK libspdk_event_sock.so 00:05:33.706 SYMLINK libspdk_event_keyring.so 00:05:33.706 SYMLINK libspdk_event_vhost_blk.so 00:05:33.706 SYMLINK libspdk_event_vmd.so 00:05:33.706 SYMLINK libspdk_event_vfu_tgt.so 00:05:33.706 SYMLINK libspdk_event_iobuf.so 00:05:34.276 CC module/event/subsystems/accel/accel.o 00:05:34.276 LIB libspdk_event_accel.a 00:05:34.276 SO libspdk_event_accel.so.6.0 00:05:34.276 SYMLINK libspdk_event_accel.so 00:05:34.846 CC module/event/subsystems/bdev/bdev.o 00:05:34.846 LIB libspdk_event_bdev.a 00:05:34.846 SO libspdk_event_bdev.so.6.0 00:05:35.106 SYMLINK libspdk_event_bdev.so 00:05:35.365 CC module/event/subsystems/ublk/ublk.o 00:05:35.365 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:35.365 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:35.365 CC module/event/subsystems/nbd/nbd.o 00:05:35.365 CC module/event/subsystems/scsi/scsi.o 00:05:35.365 LIB libspdk_event_ublk.a 00:05:35.365 LIB libspdk_event_nbd.a 00:05:35.624 LIB libspdk_event_scsi.a 00:05:35.624 SO libspdk_event_ublk.so.3.0 00:05:35.624 SO libspdk_event_nbd.so.6.0 00:05:35.624 LIB libspdk_event_nvmf.a 00:05:35.624 SO libspdk_event_scsi.so.6.0 00:05:35.624 SYMLINK libspdk_event_ublk.so 00:05:35.624 SO libspdk_event_nvmf.so.6.0 00:05:35.624 SYMLINK libspdk_event_nbd.so 00:05:35.624 SYMLINK libspdk_event_scsi.so 00:05:35.624 SYMLINK libspdk_event_nvmf.so 00:05:35.883 CC module/event/subsystems/iscsi/iscsi.o 00:05:35.883 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:36.142 LIB libspdk_event_iscsi.a 00:05:36.143 SO libspdk_event_iscsi.so.6.0 00:05:36.143 LIB libspdk_event_vhost_scsi.a 00:05:36.143 SYMLINK libspdk_event_iscsi.so 00:05:36.143 SO libspdk_event_vhost_scsi.so.3.0 00:05:36.143 SYMLINK libspdk_event_vhost_scsi.so 00:05:36.402 SO libspdk.so.6.0 00:05:36.402 SYMLINK libspdk.so 00:05:36.661 CC test/rpc_client/rpc_client_test.o 00:05:36.926 TEST_HEADER include/spdk/accel_module.h 00:05:36.926 TEST_HEADER include/spdk/accel.h 00:05:36.926 TEST_HEADER include/spdk/assert.h 00:05:36.926 TEST_HEADER include/spdk/barrier.h 00:05:36.926 TEST_HEADER include/spdk/bdev.h 00:05:36.926 TEST_HEADER include/spdk/base64.h 00:05:36.926 TEST_HEADER include/spdk/bdev_module.h 
00:05:36.926 TEST_HEADER include/spdk/bdev_zone.h 00:05:36.926 TEST_HEADER include/spdk/bit_array.h 00:05:36.926 CC app/spdk_nvme_perf/perf.o 00:05:36.926 TEST_HEADER include/spdk/bit_pool.h 00:05:36.926 TEST_HEADER include/spdk/blob_bdev.h 00:05:36.926 TEST_HEADER include/spdk/blobfs.h 00:05:36.926 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:36.926 CC app/trace_record/trace_record.o 00:05:36.926 TEST_HEADER include/spdk/blob.h 00:05:36.926 TEST_HEADER include/spdk/conf.h 00:05:36.926 TEST_HEADER include/spdk/config.h 00:05:36.926 TEST_HEADER include/spdk/cpuset.h 00:05:36.926 TEST_HEADER include/spdk/crc16.h 00:05:36.926 TEST_HEADER include/spdk/crc32.h 00:05:36.926 CXX app/trace/trace.o 00:05:36.926 CC app/spdk_nvme_identify/identify.o 00:05:36.926 TEST_HEADER include/spdk/dif.h 00:05:36.926 TEST_HEADER include/spdk/crc64.h 00:05:36.926 TEST_HEADER include/spdk/dma.h 00:05:36.926 TEST_HEADER include/spdk/endian.h 00:05:36.926 TEST_HEADER include/spdk/env_dpdk.h 00:05:36.926 TEST_HEADER include/spdk/env.h 00:05:36.926 CC app/spdk_top/spdk_top.o 00:05:36.926 TEST_HEADER include/spdk/event.h 00:05:36.926 CC app/spdk_lspci/spdk_lspci.o 00:05:36.926 TEST_HEADER include/spdk/fd_group.h 00:05:36.926 TEST_HEADER include/spdk/fd.h 00:05:36.926 TEST_HEADER include/spdk/file.h 00:05:36.926 TEST_HEADER include/spdk/ftl.h 00:05:36.926 TEST_HEADER include/spdk/hexlify.h 00:05:36.926 TEST_HEADER include/spdk/gpt_spec.h 00:05:36.926 TEST_HEADER include/spdk/histogram_data.h 00:05:36.926 TEST_HEADER include/spdk/idxd.h 00:05:36.926 CC app/spdk_nvme_discover/discovery_aer.o 00:05:36.926 TEST_HEADER include/spdk/idxd_spec.h 00:05:36.926 CC app/nvmf_tgt/nvmf_main.o 00:05:36.926 TEST_HEADER include/spdk/init.h 00:05:36.926 TEST_HEADER include/spdk/ioat.h 00:05:36.926 TEST_HEADER include/spdk/iscsi_spec.h 00:05:36.926 TEST_HEADER include/spdk/ioat_spec.h 00:05:36.926 TEST_HEADER include/spdk/json.h 00:05:36.926 TEST_HEADER include/spdk/jsonrpc.h 00:05:36.926 TEST_HEADER include/spdk/keyring.h 00:05:36.926 TEST_HEADER include/spdk/keyring_module.h 00:05:36.926 TEST_HEADER include/spdk/likely.h 00:05:36.926 TEST_HEADER include/spdk/log.h 00:05:36.926 TEST_HEADER include/spdk/lvol.h 00:05:36.926 TEST_HEADER include/spdk/memory.h 00:05:36.926 TEST_HEADER include/spdk/mmio.h 00:05:36.926 TEST_HEADER include/spdk/nbd.h 00:05:36.926 TEST_HEADER include/spdk/nvme.h 00:05:36.926 TEST_HEADER include/spdk/notify.h 00:05:36.926 TEST_HEADER include/spdk/nvme_intel.h 00:05:36.926 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:36.926 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:36.926 TEST_HEADER include/spdk/nvme_spec.h 00:05:36.926 TEST_HEADER include/spdk/nvme_zns.h 00:05:36.926 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:36.926 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:36.926 TEST_HEADER include/spdk/nvmf.h 00:05:36.926 TEST_HEADER include/spdk/nvmf_spec.h 00:05:36.926 TEST_HEADER include/spdk/nvmf_transport.h 00:05:36.926 TEST_HEADER include/spdk/opal.h 00:05:36.926 TEST_HEADER include/spdk/opal_spec.h 00:05:36.926 TEST_HEADER include/spdk/pci_ids.h 00:05:36.926 TEST_HEADER include/spdk/pipe.h 00:05:36.926 TEST_HEADER include/spdk/queue.h 00:05:36.926 TEST_HEADER include/spdk/reduce.h 00:05:36.926 TEST_HEADER include/spdk/rpc.h 00:05:36.926 TEST_HEADER include/spdk/scheduler.h 00:05:36.926 TEST_HEADER include/spdk/scsi_spec.h 00:05:36.926 TEST_HEADER include/spdk/scsi.h 00:05:36.926 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:36.926 TEST_HEADER include/spdk/sock.h 00:05:36.926 TEST_HEADER include/spdk/string.h 
00:05:36.926 TEST_HEADER include/spdk/stdinc.h 00:05:36.926 TEST_HEADER include/spdk/thread.h 00:05:36.926 TEST_HEADER include/spdk/trace.h 00:05:36.926 TEST_HEADER include/spdk/tree.h 00:05:36.926 TEST_HEADER include/spdk/trace_parser.h 00:05:36.926 CC app/spdk_dd/spdk_dd.o 00:05:36.926 TEST_HEADER include/spdk/util.h 00:05:36.926 TEST_HEADER include/spdk/ublk.h 00:05:36.926 CC app/vhost/vhost.o 00:05:36.926 TEST_HEADER include/spdk/uuid.h 00:05:36.926 TEST_HEADER include/spdk/version.h 00:05:36.926 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:36.926 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:36.926 TEST_HEADER include/spdk/xor.h 00:05:36.926 TEST_HEADER include/spdk/vmd.h 00:05:36.926 TEST_HEADER include/spdk/vhost.h 00:05:36.926 TEST_HEADER include/spdk/zipf.h 00:05:36.926 CXX test/cpp_headers/accel.o 00:05:36.926 CXX test/cpp_headers/accel_module.o 00:05:36.926 CC app/iscsi_tgt/iscsi_tgt.o 00:05:36.926 CXX test/cpp_headers/assert.o 00:05:36.926 CXX test/cpp_headers/barrier.o 00:05:36.926 CXX test/cpp_headers/base64.o 00:05:36.926 CXX test/cpp_headers/bdev.o 00:05:36.926 CXX test/cpp_headers/bdev_module.o 00:05:36.926 CXX test/cpp_headers/bit_array.o 00:05:36.926 CXX test/cpp_headers/bdev_zone.o 00:05:36.926 CXX test/cpp_headers/bit_pool.o 00:05:36.926 CXX test/cpp_headers/blobfs_bdev.o 00:05:36.926 CXX test/cpp_headers/blob_bdev.o 00:05:36.926 CXX test/cpp_headers/blobfs.o 00:05:36.926 CXX test/cpp_headers/blob.o 00:05:36.926 CXX test/cpp_headers/conf.o 00:05:36.926 CXX test/cpp_headers/config.o 00:05:36.926 CXX test/cpp_headers/cpuset.o 00:05:36.926 CXX test/cpp_headers/crc16.o 00:05:36.926 CXX test/cpp_headers/crc32.o 00:05:36.926 CXX test/cpp_headers/crc64.o 00:05:36.926 CXX test/cpp_headers/dma.o 00:05:36.926 CXX test/cpp_headers/dif.o 00:05:36.926 CC app/spdk_tgt/spdk_tgt.o 00:05:36.926 CXX test/cpp_headers/endian.o 00:05:36.926 CXX test/cpp_headers/env_dpdk.o 00:05:36.926 CXX test/cpp_headers/event.o 00:05:36.926 CXX test/cpp_headers/env.o 00:05:36.926 CXX test/cpp_headers/fd_group.o 00:05:36.926 CXX test/cpp_headers/fd.o 00:05:36.926 CXX test/cpp_headers/file.o 00:05:36.927 CXX test/cpp_headers/ftl.o 00:05:36.927 CXX test/cpp_headers/gpt_spec.o 00:05:36.927 CXX test/cpp_headers/hexlify.o 00:05:36.927 CXX test/cpp_headers/histogram_data.o 00:05:36.927 CXX test/cpp_headers/idxd.o 00:05:36.927 CXX test/cpp_headers/init.o 00:05:36.927 CXX test/cpp_headers/idxd_spec.o 00:05:36.927 CXX test/cpp_headers/ioat.o 00:05:36.927 CXX test/cpp_headers/ioat_spec.o 00:05:37.200 CC test/nvme/aer/aer.o 00:05:37.200 CC test/nvme/e2edp/nvme_dp.o 00:05:37.200 CC test/nvme/startup/startup.o 00:05:37.200 CC test/nvme/reset/reset.o 00:05:37.200 CC test/nvme/fused_ordering/fused_ordering.o 00:05:37.200 CC test/app/stub/stub.o 00:05:37.200 CC test/event/reactor_perf/reactor_perf.o 00:05:37.200 CC test/nvme/compliance/nvme_compliance.o 00:05:37.200 CC test/event/event_perf/event_perf.o 00:05:37.200 CC test/app/histogram_perf/histogram_perf.o 00:05:37.200 CC test/app/jsoncat/jsoncat.o 00:05:37.200 CC test/nvme/fdp/fdp.o 00:05:37.200 CC test/nvme/simple_copy/simple_copy.o 00:05:37.200 CC test/nvme/sgl/sgl.o 00:05:37.200 CC test/nvme/overhead/overhead.o 00:05:37.201 CC test/nvme/connect_stress/connect_stress.o 00:05:37.201 CC test/event/reactor/reactor.o 00:05:37.201 CC test/nvme/err_injection/err_injection.o 00:05:37.201 CC test/nvme/reserve/reserve.o 00:05:37.201 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:37.201 CC test/nvme/cuse/cuse.o 00:05:37.201 CC test/nvme/boot_partition/boot_partition.o 
00:05:37.201 CC test/env/vtophys/vtophys.o 00:05:37.201 CC examples/ioat/perf/perf.o 00:05:37.201 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:37.201 CC examples/ioat/verify/verify.o 00:05:37.201 CC examples/sock/hello_world/hello_sock.o 00:05:37.201 CC test/blobfs/mkfs/mkfs.o 00:05:37.201 CC examples/nvme/hello_world/hello_world.o 00:05:37.201 CC examples/vmd/led/led.o 00:05:37.201 CC test/thread/poller_perf/poller_perf.o 00:05:37.201 CC test/env/memory/memory_ut.o 00:05:37.201 CC examples/nvme/arbitration/arbitration.o 00:05:37.201 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:37.201 CC test/accel/dif/dif.o 00:05:37.201 CC test/env/pci/pci_ut.o 00:05:37.201 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:37.201 CC examples/vmd/lsvmd/lsvmd.o 00:05:37.201 CC test/event/app_repeat/app_repeat.o 00:05:37.201 CC examples/nvme/reconnect/reconnect.o 00:05:37.201 CC examples/nvme/abort/abort.o 00:05:37.201 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:37.201 CC examples/accel/perf/accel_perf.o 00:05:37.201 CC examples/nvme/hotplug/hotplug.o 00:05:37.201 CC examples/idxd/perf/perf.o 00:05:37.201 CC examples/util/zipf/zipf.o 00:05:37.201 CC test/app/bdev_svc/bdev_svc.o 00:05:37.201 CC test/bdev/bdevio/bdevio.o 00:05:37.201 CC app/fio/nvme/fio_plugin.o 00:05:37.201 CC test/dma/test_dma/test_dma.o 00:05:37.201 CC test/event/scheduler/scheduler.o 00:05:37.201 CC examples/bdev/hello_world/hello_bdev.o 00:05:37.201 CC examples/bdev/bdevperf/bdevperf.o 00:05:37.201 CC examples/blob/hello_world/hello_blob.o 00:05:37.201 CC examples/blob/cli/blobcli.o 00:05:37.201 CC examples/thread/thread/thread_ex.o 00:05:37.201 CC app/fio/bdev/fio_plugin.o 00:05:37.201 CC examples/nvmf/nvmf/nvmf.o 00:05:37.463 LINK spdk_lspci 00:05:37.463 LINK nvmf_tgt 00:05:37.463 LINK rpc_client_test 00:05:37.463 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:37.463 CC test/lvol/esnap/esnap.o 00:05:37.463 LINK spdk_nvme_discover 00:05:37.724 LINK interrupt_tgt 00:05:37.724 CC test/env/mem_callbacks/mem_callbacks.o 00:05:37.724 LINK vhost 00:05:37.724 LINK spdk_trace_record 00:05:37.724 LINK reactor_perf 00:05:37.724 LINK reactor 00:05:37.724 LINK lsvmd 00:05:37.724 LINK jsoncat 00:05:37.724 LINK event_perf 00:05:37.724 CXX test/cpp_headers/iscsi_spec.o 00:05:37.724 LINK histogram_perf 00:05:37.724 LINK vtophys 00:05:37.724 LINK iscsi_tgt 00:05:37.724 LINK led 00:05:37.724 CXX test/cpp_headers/json.o 00:05:37.724 LINK poller_perf 00:05:37.724 LINK boot_partition 00:05:37.724 CXX test/cpp_headers/jsonrpc.o 00:05:37.724 LINK stub 00:05:37.724 LINK connect_stress 00:05:37.724 LINK zipf 00:05:37.724 LINK startup 00:05:37.724 CXX test/cpp_headers/keyring.o 00:05:37.724 CXX test/cpp_headers/keyring_module.o 00:05:37.724 CXX test/cpp_headers/likely.o 00:05:37.724 CXX test/cpp_headers/log.o 00:05:37.724 CXX test/cpp_headers/lvol.o 00:05:37.724 CXX test/cpp_headers/memory.o 00:05:37.724 LINK spdk_tgt 00:05:37.724 CXX test/cpp_headers/mmio.o 00:05:37.724 CXX test/cpp_headers/nbd.o 00:05:37.724 CXX test/cpp_headers/notify.o 00:05:37.724 CXX test/cpp_headers/nvme.o 00:05:37.724 CXX test/cpp_headers/nvme_intel.o 00:05:37.724 CXX test/cpp_headers/nvme_ocssd.o 00:05:37.724 LINK app_repeat 00:05:37.724 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:37.724 CXX test/cpp_headers/nvme_spec.o 00:05:37.724 CXX test/cpp_headers/nvme_zns.o 00:05:37.724 LINK pmr_persistence 00:05:37.724 CXX test/cpp_headers/nvmf_cmd.o 00:05:37.724 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:37.724 LINK err_injection 00:05:37.724 CXX 
test/cpp_headers/nvmf.o 00:05:37.724 CXX test/cpp_headers/nvmf_spec.o 00:05:37.724 CXX test/cpp_headers/nvmf_transport.o 00:05:37.724 CXX test/cpp_headers/opal.o 00:05:37.724 LINK env_dpdk_post_init 00:05:37.724 CXX test/cpp_headers/opal_spec.o 00:05:37.724 CXX test/cpp_headers/pci_ids.o 00:05:37.724 CXX test/cpp_headers/pipe.o 00:05:37.724 CXX test/cpp_headers/queue.o 00:05:37.724 CXX test/cpp_headers/reduce.o 00:05:37.724 LINK fused_ordering 00:05:37.724 LINK cmb_copy 00:05:37.724 LINK mkfs 00:05:37.724 LINK doorbell_aers 00:05:37.724 CXX test/cpp_headers/rpc.o 00:05:37.724 CXX test/cpp_headers/scheduler.o 00:05:37.724 CXX test/cpp_headers/scsi.o 00:05:37.724 CXX test/cpp_headers/sock.o 00:05:37.724 CXX test/cpp_headers/scsi_spec.o 00:05:37.724 LINK ioat_perf 00:05:37.724 LINK bdev_svc 00:05:37.724 LINK simple_copy 00:05:37.724 LINK hello_world 00:05:37.724 LINK hello_sock 00:05:37.724 CXX test/cpp_headers/stdinc.o 00:05:37.724 LINK verify 00:05:37.724 LINK reserve 00:05:37.724 CXX test/cpp_headers/string.o 00:05:37.724 LINK sgl 00:05:37.724 LINK reset 00:05:37.724 CXX test/cpp_headers/thread.o 00:05:37.724 LINK aer 00:05:37.724 LINK nvme_dp 00:05:37.989 CXX test/cpp_headers/trace.o 00:05:37.989 LINK overhead 00:05:37.989 LINK hotplug 00:05:37.989 CXX test/cpp_headers/trace_parser.o 00:05:37.989 LINK hello_bdev 00:05:37.989 LINK fdp 00:05:37.989 LINK scheduler 00:05:37.989 LINK hello_blob 00:05:37.989 LINK nvme_compliance 00:05:37.989 LINK spdk_dd 00:05:37.989 LINK thread 00:05:37.989 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:37.989 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:37.989 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:37.989 CXX test/cpp_headers/tree.o 00:05:37.989 LINK arbitration 00:05:37.989 LINK idxd_perf 00:05:37.989 CXX test/cpp_headers/ublk.o 00:05:37.989 CXX test/cpp_headers/util.o 00:05:37.989 CXX test/cpp_headers/uuid.o 00:05:37.989 CXX test/cpp_headers/version.o 00:05:37.989 LINK spdk_trace 00:05:37.989 CXX test/cpp_headers/vfio_user_pci.o 00:05:37.989 LINK abort 00:05:37.989 CXX test/cpp_headers/vfio_user_spec.o 00:05:37.989 CXX test/cpp_headers/vhost.o 00:05:37.989 CXX test/cpp_headers/vmd.o 00:05:37.989 LINK pci_ut 00:05:37.989 LINK reconnect 00:05:37.989 CXX test/cpp_headers/xor.o 00:05:37.989 CXX test/cpp_headers/zipf.o 00:05:38.273 LINK accel_perf 00:05:38.273 LINK dif 00:05:38.273 LINK test_dma 00:05:38.273 LINK bdevio 00:05:38.273 LINK nvmf 00:05:38.273 LINK spdk_nvme 00:05:38.273 LINK nvme_manage 00:05:38.273 LINK spdk_bdev 00:05:38.273 LINK blobcli 00:05:38.273 LINK spdk_nvme_perf 00:05:38.273 LINK nvme_fuzz 00:05:38.273 LINK spdk_top 00:05:38.532 LINK spdk_nvme_identify 00:05:38.532 LINK mem_callbacks 00:05:38.532 LINK vhost_fuzz 00:05:38.532 LINK bdevperf 00:05:38.532 LINK memory_ut 00:05:38.792 LINK cuse 00:05:39.361 LINK iscsi_fuzz 00:05:41.271 LINK esnap 00:05:41.271 00:05:41.271 real 0m47.585s 00:05:41.271 user 6m31.990s 00:05:41.271 sys 4m20.498s 00:05:41.271 08:39:58 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:05:41.271 08:39:58 -- common/autotest_common.sh@10 -- $ set +x 00:05:41.271 ************************************ 00:05:41.271 END TEST make 00:05:41.271 ************************************ 00:05:41.531 08:39:58 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:41.531 08:39:58 -- pm/common@30 -- $ signal_monitor_resources TERM 00:05:41.531 08:39:58 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:05:41.531 08:39:58 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.531 08:39:58 -- 
pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:41.531 08:39:58 -- pm/common@45 -- $ pid=1794255 00:05:41.531 08:39:58 -- pm/common@52 -- $ sudo kill -TERM 1794255 00:05:41.531 08:39:58 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.531 08:39:58 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:41.531 08:39:58 -- pm/common@45 -- $ pid=1794263 00:05:41.531 08:39:58 -- pm/common@52 -- $ sudo kill -TERM 1794263 00:05:41.531 08:39:58 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.531 08:39:58 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:41.531 08:39:58 -- pm/common@45 -- $ pid=1794264 00:05:41.531 08:39:58 -- pm/common@52 -- $ sudo kill -TERM 1794264 00:05:41.531 08:39:58 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.531 08:39:58 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:41.531 08:39:58 -- pm/common@45 -- $ pid=1794256 00:05:41.531 08:39:58 -- pm/common@52 -- $ sudo kill -TERM 1794256 00:05:41.790 08:39:58 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.790 08:39:58 -- nvmf/common.sh@7 -- # uname -s 00:05:41.790 08:39:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.790 08:39:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.790 08:39:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.790 08:39:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.791 08:39:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.791 08:39:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.791 08:39:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.791 08:39:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.791 08:39:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.791 08:39:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.791 08:39:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:41.791 08:39:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:41.791 08:39:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.791 08:39:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.791 08:39:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:41.791 08:39:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.791 08:39:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.791 08:39:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.791 08:39:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.791 08:39:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.791 08:39:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.791 08:39:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.791 08:39:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.791 08:39:58 -- paths/export.sh@5 -- # export PATH 00:05:41.791 08:39:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.791 08:39:58 -- nvmf/common.sh@47 -- # : 0 00:05:41.791 08:39:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:41.791 08:39:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:41.791 08:39:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.791 08:39:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.791 08:39:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.791 08:39:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:41.791 08:39:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:41.791 08:39:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:41.791 08:39:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:41.791 08:39:58 -- spdk/autotest.sh@32 -- # uname -s 00:05:41.791 08:39:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:41.791 08:39:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:41.791 08:39:58 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:41.791 08:39:58 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:41.791 08:39:58 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:41.791 08:39:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:41.791 08:39:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:41.791 08:39:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:41.791 08:39:58 -- spdk/autotest.sh@48 -- # udevadm_pid=1854843 00:05:41.791 08:39:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:41.791 08:39:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:41.791 08:39:58 -- pm/common@17 -- # local monitor 00:05:41.791 08:39:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.791 08:39:58 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1854845 00:05:41.791 08:39:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.791 08:39:58 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1854847 00:05:41.791 08:39:58 -- pm/common@21 -- # date +%s 00:05:41.791 08:39:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.791 08:39:58 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1854850 00:05:41.791 08:39:58 -- pm/common@21 -- # date +%s 00:05:41.791 08:39:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:41.791 08:39:58 -- pm/common@21 -- # date 
+%s 00:05:41.791 08:39:58 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1854854 00:05:41.791 08:39:58 -- pm/common@26 -- # sleep 1 00:05:41.791 08:39:58 -- pm/common@21 -- # date +%s 00:05:41.791 08:39:58 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714113598 00:05:41.791 08:39:58 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714113598 00:05:41.791 08:39:58 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714113598 00:05:41.791 08:39:58 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714113598 00:05:41.791 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714113598_collect-cpu-temp.pm.log 00:05:41.791 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714113598_collect-vmstat.pm.log 00:05:41.791 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714113598_collect-bmc-pm.bmc.pm.log 00:05:41.791 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714113598_collect-cpu-load.pm.log 00:05:42.729 08:39:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:42.729 08:39:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:42.729 08:39:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:42.729 08:39:59 -- common/autotest_common.sh@10 -- # set +x 00:05:42.729 08:39:59 -- spdk/autotest.sh@59 -- # create_test_list 00:05:42.729 08:39:59 -- common/autotest_common.sh@734 -- # xtrace_disable 00:05:42.729 08:39:59 -- common/autotest_common.sh@10 -- # set +x 00:05:42.729 08:39:59 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:42.729 08:39:59 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:42.729 08:39:59 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:42.729 08:39:59 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:42.729 08:39:59 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:42.729 08:39:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:42.729 08:39:59 -- common/autotest_common.sh@1441 -- # uname 00:05:42.729 08:39:59 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:05:42.729 08:39:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:42.729 08:39:59 -- common/autotest_common.sh@1461 -- # uname 00:05:42.729 08:39:59 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:05:42.729 08:39:59 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:42.989 08:39:59 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:05:42.989 08:39:59 -- spdk/autotest.sh@72 -- # hash lcov 00:05:42.989 08:39:59 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == 
*\c\l\a\n\g* ]] 00:05:42.989 08:39:59 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:05:42.989 --rc lcov_branch_coverage=1 00:05:42.989 --rc lcov_function_coverage=1 00:05:42.989 --rc genhtml_branch_coverage=1 00:05:42.989 --rc genhtml_function_coverage=1 00:05:42.989 --rc genhtml_legend=1 00:05:42.989 --rc geninfo_all_blocks=1 00:05:42.989 ' 00:05:42.989 08:39:59 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:05:42.989 --rc lcov_branch_coverage=1 00:05:42.989 --rc lcov_function_coverage=1 00:05:42.989 --rc genhtml_branch_coverage=1 00:05:42.989 --rc genhtml_function_coverage=1 00:05:42.989 --rc genhtml_legend=1 00:05:42.989 --rc geninfo_all_blocks=1 00:05:42.989 ' 00:05:42.989 08:39:59 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:05:42.989 --rc lcov_branch_coverage=1 00:05:42.989 --rc lcov_function_coverage=1 00:05:42.989 --rc genhtml_branch_coverage=1 00:05:42.989 --rc genhtml_function_coverage=1 00:05:42.989 --rc genhtml_legend=1 00:05:42.989 --rc geninfo_all_blocks=1 00:05:42.989 --no-external' 00:05:42.989 08:39:59 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:05:42.989 --rc lcov_branch_coverage=1 00:05:42.989 --rc lcov_function_coverage=1 00:05:42.989 --rc genhtml_branch_coverage=1 00:05:42.989 --rc genhtml_function_coverage=1 00:05:42.989 --rc genhtml_legend=1 00:05:42.989 --rc geninfo_all_blocks=1 00:05:42.989 --no-external' 00:05:42.989 08:39:59 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:05:42.989 lcov: LCOV version 1.14 00:05:42.989 08:40:00 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:49.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:49.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:05:49.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:49.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:05:49.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:49.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:05:49.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:49.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:05:49.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:49.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:05:49.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:49.556 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:05:49.556 [geninfo then emitted the same warning pair -- '<header>.gcno:no functions found' followed by 'geninfo: WARNING: GCOV did not produce any data for <header>.gcno' -- for every remaining header object under test/cpp_headers, bit_array.gcno through xor.gcno] 00:05:53.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:53.092 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:01.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:06:01.206 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:06:01.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:06:01.206 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:06:01.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:06:01.206 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:07.762 08:40:24 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:06:07.762 08:40:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:07.762 08:40:24 -- common/autotest_common.sh@10 -- # set +x 00:06:07.762 08:40:24 -- spdk/autotest.sh@91 -- # rm -f 00:06:07.762 08:40:24 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:11.047 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:06:11.047 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:06:11.047 08:40:28 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:06:11.047 08:40:28 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:11.047 08:40:28 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:11.047 08:40:28 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:11.047 08:40:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:11.047 08:40:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:11.047 08:40:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:11.047 08:40:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:11.047 08:40:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:11.047 08:40:28 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:06:11.047 08:40:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:06:11.047 08:40:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:06:11.047 08:40:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 
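[Editor's note: the get_zoned_devs trace above reduces to a small sysfs probe -- a block device is zoned exactly when /sys/block/<dev>/queue/zoned reads something other than "none". A minimal standalone sketch of that idea (illustrative only; not the verbatim autotest_common.sh implementation):

#!/usr/bin/env bash
# Collect zoned NVMe namespaces by probing the sysfs "zoned" attribute.
declare -A zoned_devs
for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}                        # e.g. nvme0n1
    [[ -e $nvme/queue/zoned ]] || continue
    [[ $(<"$nvme/queue/zoned") != none ]] && zoned_devs[$dev]=1
done
echo "found ${#zoned_devs[@]} zoned device(s): ${!zoned_devs[*]}"

On this node the attribute read "none" for nvme0n1, which is why the trace shows '[[ none != none ]]' failing and '(( 0 > 0 ))' skipping the zoned-device handling.]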
00:06:11.047 08:40:28 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:06:11.047 08:40:28 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:11.047 No valid GPT data, bailing 00:06:11.047 08:40:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:11.047 08:40:28 -- scripts/common.sh@391 -- # pt= 00:06:11.047 08:40:28 -- scripts/common.sh@392 -- # return 1 00:06:11.047 08:40:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:11.047 1+0 records in 00:06:11.047 1+0 records out 00:06:11.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00617997 s, 170 MB/s 00:06:11.047 08:40:28 -- spdk/autotest.sh@118 -- # sync 00:06:11.047 08:40:28 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:11.047 08:40:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:11.047 08:40:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:17.606 08:40:34 -- spdk/autotest.sh@124 -- # uname -s 00:06:17.606 08:40:34 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:06:17.606 08:40:34 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:06:17.606 08:40:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.606 08:40:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.606 08:40:34 -- common/autotest_common.sh@10 -- # set +x 00:06:17.606 ************************************ 00:06:17.606 START TEST setup.sh 00:06:17.606 ************************************ 00:06:17.606 08:40:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:06:17.606 * Looking for test storage... 00:06:17.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:17.606 08:40:34 -- setup/test-setup.sh@10 -- # uname -s 00:06:17.606 08:40:34 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:06:17.606 08:40:34 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:06:17.606 08:40:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.606 08:40:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.606 08:40:34 -- common/autotest_common.sh@10 -- # set +x 00:06:17.606 ************************************ 00:06:17.606 START TEST acl 00:06:17.606 ************************************ 00:06:17.606 08:40:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:06:17.606 * Looking for test storage... 
00:06:17.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:17.606 08:40:34 -- setup/acl.sh@10 -- # get_zoned_devs 00:06:17.606 08:40:34 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:17.606 08:40:34 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:17.606 08:40:34 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:17.606 08:40:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:17.606 08:40:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:17.606 08:40:34 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:17.606 08:40:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:17.606 08:40:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:17.606 08:40:34 -- setup/acl.sh@12 -- # devs=() 00:06:17.606 08:40:34 -- setup/acl.sh@12 -- # declare -a devs 00:06:17.606 08:40:34 -- setup/acl.sh@13 -- # drivers=() 00:06:17.606 08:40:34 -- setup/acl.sh@13 -- # declare -A drivers 00:06:17.606 08:40:34 -- setup/acl.sh@51 -- # setup reset 00:06:17.606 08:40:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:17.606 08:40:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:21.800 08:40:38 -- setup/acl.sh@52 -- # collect_setup_devs 00:06:21.800 08:40:38 -- setup/acl.sh@16 -- # local dev driver 00:06:21.800 08:40:38 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:21.800 08:40:38 -- setup/acl.sh@15 -- # setup output status 00:06:21.800 08:40:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:21.800 08:40:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:24.332 Hugepages 00:06:24.332 node hugesize free / total 00:06:24.332 08:40:41 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:24.332 08:40:41 -- setup/acl.sh@19 -- # continue 00:06:24.332 08:40:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:24.332 08:40:41 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:24.332 08:40:41 -- setup/acl.sh@19 -- # continue 00:06:24.332 08:40:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:24.332 08:40:41 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:06:24.332 08:40:41 -- setup/acl.sh@19 -- # continue 00:06:24.332 08:40:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:24.332 00:06:24.332 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:24.332 08:40:41 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:06:24.332 08:40:41 -- setup/acl.sh@19 -- # continue 00:06:24.332 08:40:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:24.332 08:40:41 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:06:24.332 08:40:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:24.332 08:40:41 -- setup/acl.sh@20 -- # continue 00:06:24.332 08:40:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:24.332 08:40:41 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:06:24.332 08:40:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:24.332 08:40:41 -- setup/acl.sh@20 -- # continue 00:06:24.332 08:40:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:24.332 08:40:41 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:06:24.332 08:40:41 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:06:24.332 08:40:41 -- setup/acl.sh@20 -- # continue 00:06:24.332 08:40:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
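[Editor's note: the 'read -r _ dev _ _ _ driver _' loop above is collect_setup_devs walking the 'setup.sh status' table (Type BDF Vendor Device NUMA Driver ... columns, per the header printed above) and keeping BDF/driver pairs. A hedged sketch of the same parsing pattern, with the script path assumed for illustration:

# Collect NVMe controllers from "setup.sh status"-style output.
# Each "_" discards the Type, Vendor, Device and NUMA columns.
declare -a devs
declare -A drivers
while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue     # keep only PCI BDF rows
    [[ $driver == nvme ]] || continue     # skip ioatdma and friends
    devs+=("$dev")
    drivers[$dev]=$driver
done < <(sudo ./scripts/setup.sh status)
for dev in "${devs[@]}"; do echo "$dev -> ${drivers[$dev]}"; done

The '[[ $dev == *:*:*.* ]]' and '[[ ioatdma == nvme ]]' tests in the trace below are exactly these two filters, unrolled once per table row.]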
00:06:24.332 [the same setup/acl.sh@19/@20 xtrace -- BDF pattern match, '[[ ioatdma == nvme ]]', continue, read -- repeats for 0000:00:04.3 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7] 00:06:24.333 08:40:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:06:24.591 08:40:41 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:06:24.591 08:40:41 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:06:24.591 08:40:41 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:06:24.591 08:40:41 -- setup/acl.sh@22 -- # devs+=("$dev") 00:06:24.591 08:40:41 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:06:24.591 08:40:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:06:24.591 08:40:41 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:06:24.591 08:40:41 -- setup/acl.sh@54 -- # run_test denied denied 00:06:24.591 08:40:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.591 08:40:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.591 08:40:41 -- common/autotest_common.sh@10 -- # set +x 00:06:24.850 ************************************ 00:06:24.850 START TEST denied 00:06:24.850 ************************************ 00:06:24.850 08:40:41 -- common/autotest_common.sh@1111 -- # denied 00:06:24.850 08:40:41 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:06:24.850 08:40:41 -- setup/acl.sh@38 -- # setup output config 00:06:24.850 08:40:41 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:06:24.850 08:40:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:24.850 08:40:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:29.037 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:06:29.038 08:40:45 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:06:29.038 08:40:45 -- setup/acl.sh@28 -- # local dev driver 00:06:29.038 08:40:45 -- setup/acl.sh@30 -- # for dev in "$@" 00:06:29.038 08:40:45 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:06:29.038 08:40:45 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:06:29.038 08:40:45 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:06:29.038 08:40:45 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:06:29.038 08:40:45 -- setup/acl.sh@41 -- # setup reset 00:06:29.038 08:40:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:29.038 08:40:45 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:33.230 00:06:33.230 real 0m8.286s 00:06:33.230 user 0m2.645s 00:06:33.230 sys 0m5.031s 00:06:33.230 08:40:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.230 08:40:50 -- common/autotest_common.sh@10 -- # set +x 00:06:33.230 ************************************ 00:06:33.230 END TEST denied 00:06:33.230 ************************************ 00:06:33.230 08:40:50 -- setup/acl.sh@55 -- # run_test allowed allowed 00:06:33.230 08:40:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:33.230 08:40:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.230 08:40:50 -- common/autotest_common.sh@10 -- # set +x 00:06:33.230 ************************************ 00:06:33.230 START TEST allowed 00:06:33.230 ************************************ 00:06:33.230 08:40:50 -- common/autotest_common.sh@1111 -- # allowed 00:06:33.230 08:40:50 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:06:33.230 08:40:50 -- setup/acl.sh@45 -- # setup output config 00:06:33.230 08:40:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:33.230 08:40:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:33.230 08:40:50 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:06:38.508 
0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:06:38.508 08:40:55 -- setup/acl.sh@47 -- # verify 00:06:38.508 08:40:55 -- setup/acl.sh@28 -- # local dev driver 00:06:38.508 08:40:55 -- setup/acl.sh@48 -- # setup reset 00:06:38.508 08:40:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:38.508 08:40:55 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:42.742 00:06:42.742 real 0m8.920s 00:06:42.742 user 0m2.538s 00:06:42.742 sys 0m4.979s 00:06:42.742 08:40:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.742 08:40:59 -- common/autotest_common.sh@10 -- # set +x 00:06:42.742 ************************************ 00:06:42.742 END TEST allowed 00:06:42.742 ************************************ 00:06:42.742 00:06:42.742 real 0m24.666s 00:06:42.742 user 0m7.779s 00:06:42.742 sys 0m15.054s 00:06:42.742 08:40:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.742 08:40:59 -- common/autotest_common.sh@10 -- # set +x 00:06:42.742 ************************************ 00:06:42.742 END TEST acl 00:06:42.742 ************************************ 00:06:42.742 08:40:59 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:06:42.742 08:40:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.742 08:40:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.743 08:40:59 -- common/autotest_common.sh@10 -- # set +x 00:06:42.743 ************************************ 00:06:42.743 START TEST hugepages 00:06:42.743 ************************************ 00:06:42.743 08:40:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:06:42.743 * Looking for test storage... 
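[Editor's note: the hugepages.sh run that follows leans on get_meminfo, which scans /proc/meminfo field by field until it finds the requested key -- the long per-field xtrace below is that loop, unrolled. A minimal sketch of the lookup, assuming the stock 'Key: value kB' meminfo format:

# Print the value (in kB) of one /proc/meminfo field, e.g. Hugepagesize.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}
get_meminfo_field Hugepagesize    # -> 2048 on this node (2 MiB pages)

The same IFS=': ' / 'read -r var val _' pair is visible throughout the trace below.]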
00:06:42.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:42.743 08:40:59 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:42.743 08:40:59 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:42.743 08:40:59 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:42.743 08:40:59 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:42.743 08:40:59 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:42.743 08:40:59 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:42.743 08:40:59 -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:42.743 08:40:59 -- setup/common.sh@18 -- # local node= 00:06:42.743 08:40:59 -- setup/common.sh@19 -- # local var val 00:06:42.743 08:40:59 -- setup/common.sh@20 -- # local mem_f mem 00:06:42.743 08:40:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:42.743 08:40:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:42.743 08:40:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:42.743 08:40:59 -- setup/common.sh@28 -- # mapfile -t mem 00:06:42.743 08:40:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:42.743 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:06:42.743 08:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:06:42.743 08:40:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 37843380 kB' 'MemAvailable: 42475548 kB' 'Buffers: 3728 kB' 'Cached: 14025520 kB' 'SwapCached: 0 kB' 'Active: 10950264 kB' 'Inactive: 3665332 kB' 'Active(anon): 9839108 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 589720 kB' 'Mapped: 214560 kB' 'Shmem: 9252760 kB' 'KReclaimable: 503388 kB' 'Slab: 1162056 kB' 'SReclaimable: 503388 kB' 'SUnreclaim: 658668 kB' 'KernelStack: 22096 kB' 'PageTables: 9856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 11252792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217004 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB' 00:06:42.743 08:40:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.743 08:40:59 -- setup/common.sh@32 -- # continue 00:06:42.743 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:06:42.743 08:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:06:42.743 08:40:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.743 08:40:59 -- setup/common.sh@32 -- # continue 00:06:42.743 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:06:42.743 08:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:06:42.743 08:40:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.743 08:40:59 -- setup/common.sh@32 -- # continue 00:06:42.743 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:06:42.743 08:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:06:42.743 08:40:59 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.743 08:40:59 -- setup/common.sh@32 -- # continue 00:06:42.743 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:06:42.743 08:40:59 -- setup/common.sh@31 -- # read -r var val _ [the same '[[ $var == Hugepagesize ]]' / continue xtrace repeats for each subsequent /proc/meminfo field, Cached through ShmemHugePages] 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 
00:06:42.744 08:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # continue 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # continue 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # continue 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # continue 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # continue 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # continue 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # continue 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # continue 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # continue 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # continue 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # IFS=': ' 00:06:42.744 08:40:59 -- setup/common.sh@31 -- # read -r var val _ 00:06:42.744 08:40:59 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:42.744 08:40:59 -- setup/common.sh@33 -- # echo 2048 00:06:42.744 08:40:59 -- setup/common.sh@33 -- # return 0 00:06:42.744 08:40:59 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:06:42.744 08:40:59 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:06:42.744 08:40:59 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:06:42.744 08:40:59 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:06:42.744 08:40:59 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:06:42.744 08:40:59 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:06:42.744 08:40:59 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:06:42.744 08:40:59 -- setup/hugepages.sh@207 -- # get_nodes 00:06:42.744 08:40:59 -- setup/hugepages.sh@27 -- # local node 00:06:42.744 08:40:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:42.744 08:40:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:06:42.744 08:40:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:42.744 08:40:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:42.744 08:40:59 -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:42.744 08:40:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:42.744 08:40:59 -- setup/hugepages.sh@208 -- # clear_hp 00:06:42.744 08:40:59 -- setup/hugepages.sh@37 -- # local node hp 00:06:42.744 08:40:59 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:42.744 08:40:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:42.744 08:40:59 -- setup/hugepages.sh@41 -- # echo 0 00:06:42.744 08:40:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:42.744 08:40:59 -- setup/hugepages.sh@41 -- # echo 0 00:06:42.744 08:40:59 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:42.744 08:40:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:42.744 08:40:59 -- setup/hugepages.sh@41 -- # echo 0 00:06:42.744 08:40:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:42.744 08:40:59 -- setup/hugepages.sh@41 -- # echo 0 00:06:42.744 08:40:59 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:42.744 08:40:59 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:42.744 08:40:59 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:06:42.744 08:40:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:42.744 08:40:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.744 08:40:59 -- common/autotest_common.sh@10 -- # set +x 00:06:42.744 ************************************ 00:06:42.744 START TEST default_setup 00:06:42.744 ************************************ 00:06:42.744 08:40:59 -- common/autotest_common.sh@1111 -- # default_setup 00:06:42.744 08:40:59 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:06:42.744 08:40:59 -- setup/hugepages.sh@49 -- # local size=2097152 00:06:42.744 08:40:59 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:42.744 08:40:59 -- setup/hugepages.sh@51 -- # shift 00:06:42.744 08:40:59 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:42.744 08:40:59 -- setup/hugepages.sh@52 -- # local node_ids 00:06:42.744 08:40:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:42.744 08:40:59 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:42.744 08:40:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:42.744 08:40:59 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:42.744 08:40:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:42.744 08:40:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:42.744 08:40:59 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:42.744 08:40:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:42.744 08:40:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:42.744 08:40:59 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
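The /proc/meminfo walk condensed above is the pattern behind setup/common.sh's get_meminfo helper as it appears in this trace: split each line on ': ', continue past non-matching keys, and print the value of the first match (2048 here, the 2 MiB Hugepagesize). A standalone sketch of that loop, reading the file directly rather than through the mapfile buffer the traced script uses; the helper name get_meminfo_sketch is invented for illustration:

    #!/usr/bin/env bash
    # Sketch of the key scan traced from setup/common.sh@31-33: split each
    # /proc/meminfo line on ': ', skip non-matching keys, print the first match.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the "continue" records in the trace
            echo "$val"                        # e.g. the "echo 2048" record above
            return 0
        done </proc/meminfo
        return 1                               # key not present
    }

    get_meminfo_sketch Hugepagesize   # prints 2048 on a 2 MiB-hugepage host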
00:06:42.744 08:40:59 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:42.744 08:40:59 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:42.744 08:40:59 -- setup/hugepages.sh@73 -- # return 0 00:06:42.744 08:40:59 -- setup/hugepages.sh@137 -- # setup output 00:06:42.744 08:40:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:42.744 08:40:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:46.027 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:46.027 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:46.027 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:46.027 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:46.027 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:46.027 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:46.027 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:46.027 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:46.027 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:46.027 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:46.027 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:46.027 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:46.027 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:46.027 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:46.027 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:46.027 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:47.406 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:06:47.406 08:41:04 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:06:47.406 08:41:04 -- setup/hugepages.sh@89 -- # local node 00:06:47.406 08:41:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:47.406 08:41:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:47.406 08:41:04 -- setup/hugepages.sh@92 -- # local surp 00:06:47.406 08:41:04 -- setup/hugepages.sh@93 -- # local resv 00:06:47.406 08:41:04 -- setup/hugepages.sh@94 -- # local anon 00:06:47.406 08:41:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:47.406 08:41:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:47.406 08:41:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:47.406 08:41:04 -- setup/common.sh@18 -- # local node= 00:06:47.406 08:41:04 -- setup/common.sh@19 -- # local var val 00:06:47.406 08:41:04 -- setup/common.sh@20 -- # local mem_f mem 00:06:47.406 08:41:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:47.406 08:41:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:47.406 08:41:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:47.406 08:41:04 -- setup/common.sh@28 -- # mapfile -t mem 00:06:47.406 08:41:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:47.406 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.406 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.406 08:41:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39976400 kB' 'MemAvailable: 44608568 kB' 'Buffers: 3728 kB' 'Cached: 14025656 kB' 'SwapCached: 0 kB' 'Active: 10971368 kB' 'Inactive: 3665332 kB' 'Active(anon): 9860212 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610256 kB' 'Mapped: 215260 kB' 'Shmem: 9252896 kB' 'KReclaimable: 503388 kB' 'Slab: 1159508 kB' 'SReclaimable: 503388 kB' 'SUnreclaim: 656120 kB' 'KernelStack: 22400 
kB' 'PageTables: 10568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11263540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217184 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
00:06:47.406 08:41:04 -- setup/common.sh@31-32 -- # [xtrace condensed: the same read/compare/continue cycle runs over every /proc/meminfo key, MemTotal through HardwareCorrupted, until the target matches]
00:06:47.407 08:41:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:47.407 08:41:04 -- setup/common.sh@33 -- # echo 0
00:06:47.407 08:41:04 -- setup/common.sh@33 -- # return 0
00:06:47.407 08:41:04 -- setup/hugepages.sh@97 -- # anon=0
00:06:47.407 08:41:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:47.407 08:41:04 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:47.407 08:41:04 -- setup/common.sh@18 -- # local node=
00:06:47.407 08:41:04 -- setup/common.sh@19 -- # local var val
00:06:47.407 08:41:04 -- setup/common.sh@20 -- # local mem_f mem
00:06:47.407 08:41:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:47.407 08:41:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:47.407 08:41:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:47.407 08:41:04 -- setup/common.sh@28 -- # mapfile -t mem
00:06:47.407 08:41:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:47.407 08:41:04 -- setup/common.sh@31 -- # IFS=': '
00:06:47.407 08:41:04 -- setup/common.sh@31 -- # read -r var val _
00:06:47.407 08:41:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39981668 kB' 'MemAvailable: 44613836 kB' 'Buffers: 3728 kB' 'Cached: 14025660 kB' 'SwapCached: 0 kB' 'Active: 10965428 kB' 'Inactive: 3665332 kB' 'Active(anon): 9854272 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604760 kB' 'Mapped: 214744 kB' 'Shmem: 9252900 kB' 'KReclaimable: 503388 kB' 'Slab: 1159508 kB' 'SReclaimable: 503388 kB' 'SUnreclaim: 656120 kB' 'KernelStack: 22368 kB' 'PageTables: 9812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11257432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217132 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
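The anon=0 assignment traced above comes from the step at setup/hugepages.sh@96-97: anonymous (transparent) hugepages are only counted when THP is not pinned to [never], and the count itself is just another meminfo lookup. A hedged sketch of that step, reusing the get_meminfo_sketch helper from the earlier annotation:

    # Count AnonHugePages only while transparent hugepages are enabled; mirrors
    # the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test in the trace.
    anon=0
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)   # "0" in this run's dump
    fi
    echo "anon=$anon"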
00:06:47.407 08:41:04 -- setup/common.sh@31-32 -- # [xtrace condensed: the same read/compare/continue cycle runs over every /proc/meminfo key, MemTotal through HugePages_Rsvd, until the target matches]
00:06:47.409 08:41:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:47.409 08:41:04 -- setup/common.sh@33 -- # echo 0
00:06:47.409 08:41:04 -- setup/common.sh@33 -- # return 0
00:06:47.409 08:41:04 -- setup/hugepages.sh@99 -- # surp=0
00:06:47.409 08:41:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:47.409 08:41:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:47.409 08:41:04 -- setup/common.sh@18 -- # local node=
00:06:47.409 08:41:04 -- setup/common.sh@19 -- # local var val
00:06:47.409 08:41:04 -- setup/common.sh@20 -- # local mem_f mem
00:06:47.409 08:41:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:47.409 08:41:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:47.409 08:41:04 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:47.409 08:41:04 -- setup/common.sh@28 -- # mapfile -t mem
00:06:47.409 08:41:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:47.409 08:41:04 -- setup/common.sh@31 -- # IFS=': '
00:06:47.409 08:41:04 -- setup/common.sh@31 -- # read -r var val _
00:06:47.409 08:41:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39983188 kB' 'MemAvailable: 44615356 kB' 'Buffers: 3728 kB' 'Cached: 14025668 kB' 'SwapCached: 0 kB' 'Active: 10965332 kB' 'Inactive: 3665332 kB' 'Active(anon): 9854176 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604624 kB' 'Mapped: 214744 kB' 'Shmem: 9252908 kB' 'KReclaimable: 503388 kB' 'Slab: 1159704 kB' 'SReclaimable: 503388 kB' 'SUnreclaim: 656316 kB' 'KernelStack: 22288 kB' 'PageTables: 9876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11257448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217132 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
00:06:47.409 08:41:04 -- setup/common.sh@31-32 -- # [xtrace condensed: the same read/compare/continue cycle runs over every /proc/meminfo key, MemTotal through HugePages_Free, until the target matches]
00:06:47.670 08:41:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:47.670 08:41:04 -- setup/common.sh@33 -- # echo 0
00:06:47.670 08:41:04 -- setup/common.sh@33 -- # return 0
00:06:47.670 08:41:04 -- setup/hugepages.sh@100 -- # resv=0
00:06:47.670 08:41:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:47.670 nr_hugepages=1024
00:06:47.670 08:41:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:47.670 resv_hugepages=0
00:06:47.670 08:41:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:47.670 surplus_hugepages=0
00:06:47.670 08:41:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:47.670 anon_hugepages=0
00:06:47.670 08:41:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:47.670 08:41:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:06:47.670 08:41:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
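With anon, surp, and resv collected, the checks traced at setup/hugepages.sh@107-109 reduce to two identities: the kernel's hugepage total must equal the requested count plus surplus and reserved pages, and, with both of those at zero, must equal the requested count exactly. A sketch of that accounting with this run's values; the trace does not show which expansion produced the literal 1024 on the left-hand side, so the total is fetched explicitly here via the get_meminfo_sketch helper from the first annotation:

    # Verification arithmetic from the trace: total == nr_hugepages + surp + resv
    # and total == nr_hugepages, using the values echoed above.
    nr_hugepages=1024
    surp=0 resv=0                                   # HugePages_Surp / HugePages_Rsvd
    total=$(get_meminfo_sketch HugePages_Total)     # 1024 in the dump that follows
    (( total == nr_hugepages + surp + resv )) || echo "FAIL: surplus/reserved skew" >&2
    (( total == nr_hugepages ))               || echo "FAIL: hugepage total mismatch" >&2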
get=HugePages_Total 00:06:47.670 08:41:04 -- setup/common.sh@18 -- # local node= 00:06:47.670 08:41:04 -- setup/common.sh@19 -- # local var val 00:06:47.670 08:41:04 -- setup/common.sh@20 -- # local mem_f mem 00:06:47.670 08:41:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:47.670 08:41:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:47.671 08:41:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:47.671 08:41:04 -- setup/common.sh@28 -- # mapfile -t mem 00:06:47.671 08:41:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39984628 kB' 'MemAvailable: 44616796 kB' 'Buffers: 3728 kB' 'Cached: 14025684 kB' 'SwapCached: 0 kB' 'Active: 10965540 kB' 'Inactive: 3665332 kB' 'Active(anon): 9854384 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604280 kB' 'Mapped: 214744 kB' 'Shmem: 9252924 kB' 'KReclaimable: 503388 kB' 'Slab: 1159704 kB' 'SReclaimable: 503388 kB' 'SUnreclaim: 656316 kB' 'KernelStack: 22224 kB' 'PageTables: 9924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11257460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217164 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB' 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
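
[editor's note] The xtrace above is the scan half of get_meminfo: with no node argument, the probe of /sys/devices/system/node/node/meminfo fails and the function falls back to /proc/meminfo, strips any "Node <n> " prefix, then walks key/value pairs until the requested key matches. A minimal self-contained paraphrase of that lookup (a sketch that condenses the script's mapfile/printf loop into sed+awk; not the verbatim SPDK function):

get_meminfo() {
  local get=$1 node=$2
  local mem_f=/proc/meminfo
  # Per-node counters live in sysfs; fall back to the global file otherwise.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  # Per-node lines carry a "Node <n> " prefix; strip it, then match the key.
  sed -E 's/^Node [0-9]+ //' "$mem_f" | awk -v k="$get:" '$1 == k { print $2; exit }'
}

get_meminfo HugePages_Total      # -> 1024 on this box, per the dump above
get_meminfo HugePages_Surp 0     # node-0 value, as queried later in the log
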
00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # 
read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # 
continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.671 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.671 08:41:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 
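
[editor's note] For readers skimming the wall of per-key comparisons: this whole scan exists only to feed the arithmetic check at setup/hugepages.sh@107/@109 that brackets it. Restated in isolation as a sketch, with values taken from this run (resv and surp were both read back as 0 earlier in the trace):

nr_hugepages=1024 resv=0 surp=0
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent: $total"
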
00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:47.672 08:41:04 -- setup/common.sh@33 -- # echo 1024 00:06:47.672 08:41:04 -- setup/common.sh@33 -- # return 0 00:06:47.672 08:41:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:47.672 08:41:04 -- setup/hugepages.sh@112 -- # get_nodes 00:06:47.672 08:41:04 -- setup/hugepages.sh@27 -- # local node 00:06:47.672 08:41:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:47.672 08:41:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:47.672 08:41:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:47.672 08:41:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:47.672 08:41:04 -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:47.672 08:41:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:47.672 08:41:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:47.672 08:41:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:47.672 08:41:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:47.672 08:41:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:47.672 08:41:04 -- setup/common.sh@18 -- # local node=0 00:06:47.672 08:41:04 -- setup/common.sh@19 -- # local var val 00:06:47.672 08:41:04 -- setup/common.sh@20 -- # local mem_f mem 00:06:47.672 08:41:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:47.672 08:41:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:47.672 08:41:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:47.672 08:41:04 -- setup/common.sh@28 -- # mapfile -t mem 00:06:47.672 08:41:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18622204 kB' 'MemUsed: 14016936 kB' 'SwapCached: 0 
kB' 'Active: 6802896 kB' 'Inactive: 3291180 kB' 'Active(anon): 6268764 kB' 'Inactive(anon): 0 kB' 'Active(file): 534132 kB' 'Inactive(file): 3291180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9690340 kB' 'Mapped: 123148 kB' 'AnonPages: 406924 kB' 'Shmem: 5865028 kB' 'KernelStack: 13144 kB' 'PageTables: 5636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 335552 kB' 'Slab: 671232 kB' 'SReclaimable: 335552 kB' 'SUnreclaim: 335680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 
08:41:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.672 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.672 08:41:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': 
' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # continue 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # IFS=': ' 00:06:47.673 08:41:04 -- setup/common.sh@31 -- # read -r var val _ 00:06:47.673 08:41:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:47.673 08:41:04 -- setup/common.sh@33 -- # echo 0 00:06:47.673 08:41:04 -- setup/common.sh@33 -- # return 0 00:06:47.673 08:41:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:47.673 08:41:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:47.673 08:41:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:47.673 08:41:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:47.673 08:41:04 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:47.673 node0=1024 expecting 1024 00:06:47.673 08:41:04 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:47.673 00:06:47.673 real 0m4.912s 00:06:47.673 user 0m1.189s 00:06:47.673 sys 0m2.153s 00:06:47.673 08:41:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:47.673 08:41:04 -- common/autotest_common.sh@10 -- # set +x 00:06:47.673 ************************************ 00:06:47.673 END TEST default_setup 00:06:47.673 ************************************ 00:06:47.673 08:41:04 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:06:47.673 08:41:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.673 08:41:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.673 08:41:04 -- common/autotest_common.sh@10 -- # set +x 00:06:47.673 ************************************ 00:06:47.673 START TEST per_node_1G_alloc 00:06:47.673 ************************************ 00:06:47.673 08:41:04 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:06:47.673 08:41:04 -- setup/hugepages.sh@143 -- # local IFS=, 00:06:47.673 08:41:04 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:06:47.673 08:41:04 -- setup/hugepages.sh@49 -- # local size=1048576 00:06:47.673 08:41:04 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:06:47.673 08:41:04 -- setup/hugepages.sh@51 -- # shift 00:06:47.673 08:41:04 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:06:47.673 08:41:04 -- setup/hugepages.sh@52 -- # local node_ids 00:06:47.673 08:41:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:47.673 08:41:04 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:47.673 08:41:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:06:47.673 08:41:04 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:06:47.673 08:41:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:47.673 08:41:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:47.673 08:41:04 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:47.673 08:41:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:47.673 08:41:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:47.673 08:41:04 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:06:47.673 08:41:04 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:47.673 08:41:04 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:47.673 08:41:04 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:47.673 08:41:04 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:47.673 08:41:04 -- setup/hugepages.sh@73 -- # return 0 00:06:47.673 08:41:04 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:06:47.673 
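
[editor's note] default_setup has passed and per_node_1G_alloc begins: get_test_nr_hugepages is called with size 1048576 kB and node IDs 0 and 1, which it converts into 512 pages per node (the trace shows nodes_test[0]=nodes_test[1]=512). The arithmetic, as a sketch using the 2048 kB Hugepagesize this node reports in the meminfo dumps above:

size_kb=1048576                      # 1 GiB requested per node
hugepage_kb=2048                     # Hugepagesize from this run
nr=$(( size_kb / hugepage_kb ))      # 512
for node in 0 1; do
  echo "node$node: $nr hugepages"    # matches nodes_test[0]=512, nodes_test[1]=512
done

Per the hugepages.sh@146 lines here, the result is then handed to scripts/setup.sh through the NRHUGE=512 and HUGENODE=0,1 environment variables.
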
08:41:04 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:06:47.673 08:41:04 -- setup/hugepages.sh@146 -- # setup output 00:06:47.673 08:41:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:47.673 08:41:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:50.960 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:50.960 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:50.960 08:41:07 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:06:50.960 08:41:07 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:06:50.960 08:41:07 -- setup/hugepages.sh@89 -- # local node 00:06:50.960 08:41:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:50.960 08:41:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:50.960 08:41:07 -- setup/hugepages.sh@92 -- # local surp 00:06:50.960 08:41:07 -- setup/hugepages.sh@93 -- # local resv 00:06:50.960 08:41:07 -- setup/hugepages.sh@94 -- # local anon 00:06:50.960 08:41:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:50.960 08:41:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:50.960 08:41:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:50.960 08:41:07 -- setup/common.sh@18 -- # local node= 00:06:50.960 08:41:07 -- setup/common.sh@19 -- # local var val 00:06:50.960 08:41:07 -- setup/common.sh@20 -- # local mem_f mem 00:06:50.960 08:41:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.960 08:41:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:50.960 08:41:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:50.960 08:41:07 -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.960 08:41:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.960 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.960 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.960 08:41:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40006176 kB' 'MemAvailable: 44638344 kB' 'Buffers: 3728 kB' 'Cached: 14025772 kB' 'SwapCached: 0 kB' 'Active: 10967216 kB' 'Inactive: 3665332 kB' 'Active(anon): 9856060 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606292 kB' 'Mapped: 214804 
kB' 'Shmem: 9253012 kB' 'KReclaimable: 503388 kB' 'Slab: 1160008 kB' 'SReclaimable: 503388 kB' 'SUnreclaim: 656620 kB' 'KernelStack: 22320 kB' 'PageTables: 10096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11258056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217276 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB' 00:06:50.960 08:41:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.960 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.960 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.960 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.960 08:41:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
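
[editor's note] This AnonHugePages pass is guarded by the THP check at setup/hugepages.sh@96 above ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]): anonymous huge pages are only counted when transparent hugepages are not globally disabled. A sketch of that guard, assuming the mode string comes from the standard sysfs knob (the trace shows only the already-expanded string, not where the script read it):

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
  anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
else
  anon=0   # THP disabled: nothing to account for
fi
echo "anon_hugepages=$anon"   # 0 kB in this run, matching the echo below
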
00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.961 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.961 08:41:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:50.961 08:41:07 -- setup/common.sh@33 -- # echo 0 00:06:50.961 08:41:07 -- setup/common.sh@33 -- # return 0 00:06:50.961 08:41:07 -- setup/hugepages.sh@97 -- # anon=0 00:06:50.961 08:41:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:50.961 08:41:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:50.961 08:41:07 -- setup/common.sh@18 -- # local node= 00:06:50.961 08:41:07 -- setup/common.sh@19 -- # local var val 00:06:50.961 08:41:07 -- setup/common.sh@20 -- # local mem_f mem 00:06:50.961 08:41:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.961 08:41:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:50.961 08:41:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:50.962 08:41:07 -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.962 08:41:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40007632 kB' 'MemAvailable: 44639800 kB' 'Buffers: 3728 kB' 'Cached: 14025776 kB' 'SwapCached: 0 kB' 'Active: 10966052 kB' 'Inactive: 3665332 kB' 'Active(anon): 9854896 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605140 kB' 'Mapped: 214772 kB' 'Shmem: 9253016 kB' 'KReclaimable: 503388 kB' 'Slab: 1160016 kB' 'SReclaimable: 503388 kB' 'SUnreclaim: 656628 kB' 'KernelStack: 21936 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11256552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217052 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB' 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 
08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.962 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.962 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # 
IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.963 08:41:07 -- setup/common.sh@33 -- # echo 0 00:06:50.963 08:41:07 -- setup/common.sh@33 -- # return 0 00:06:50.963 08:41:07 -- setup/hugepages.sh@99 -- # surp=0 00:06:50.963 08:41:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:50.963 08:41:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:50.963 08:41:07 -- setup/common.sh@18 -- # local node= 00:06:50.963 08:41:07 -- setup/common.sh@19 -- # local var val 00:06:50.963 08:41:07 -- setup/common.sh@20 -- # local mem_f mem 00:06:50.963 08:41:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.963 08:41:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:50.963 08:41:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:50.963 08:41:07 -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.963 08:41:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40007604 kB' 'MemAvailable: 44639772 kB' 'Buffers: 3728 kB' 'Cached: 14025776 kB' 'SwapCached: 0 kB' 'Active: 10967124 kB' 'Inactive: 3665332 kB' 'Active(anon): 9855968 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606292 kB' 'Mapped: 214780 kB' 'Shmem: 9253016 kB' 'KReclaimable: 503388 kB' 'Slab: 1160020 kB' 'SReclaimable: 503388 kB' 'SUnreclaim: 656632 kB' 'KernelStack: 22208 kB' 'PageTables: 9536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11258080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217148 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.963 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.963 08:41:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r 
var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 
00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.964 08:41:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:50.964 08:41:07 -- setup/common.sh@33 -- # echo 0 00:06:50.964 08:41:07 -- setup/common.sh@33 -- # return 0 00:06:50.964 08:41:07 -- setup/hugepages.sh@100 -- # resv=0 00:06:50.964 08:41:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:50.964 nr_hugepages=1024 00:06:50.964 08:41:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:50.964 resv_hugepages=0 00:06:50.964 08:41:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:50.964 surplus_hugepages=0 00:06:50.964 08:41:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:50.964 anon_hugepages=0 00:06:50.964 08:41:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:50.964 08:41:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
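The xtrace above is setup/common.sh's get_meminfo scanning every /proc/meminfo field until the requested key (first HugePages_Surp, then HugePages_Rsvd) matches; the backslash-escaped right-hand side is just how bash xtrace prints the quoted pattern inside [[ ]], and every non-matching field takes the "continue" branch. A minimal standalone sketch of that parsing pattern follows; get_meminfo_sketch is a hypothetical stand-in rather than SPDK's actual helper, and it assumes extglob for the "Node N " prefix strip seen at common.sh@29:

    #!/usr/bin/env bash
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-}        # field name, optional NUMA node
        local mem_f=/proc/meminfo
        # per-node lookups read that node's own meminfo file, as the trace does
        # further on for node0 and node1
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix on node files
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        return 1
    }
    # On the machine logged here: get_meminfo_sketch HugePages_Total   -> 1024
    #                             get_meminfo_sketch HugePages_Surp 0  -> 0

With both lookups returning 0 (surp=0, resv=0), the (( 1024 == nr_hugepages + surp + resv )) check above is the test confirming that all 1024 requested 2048 kB pages are ordinary allocated hugepages, none of them surplus or reserved.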
00:06:50.964 08:41:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:50.964 08:41:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:50.964 08:41:07 -- setup/common.sh@18 -- # local node= 00:06:50.964 08:41:07 -- setup/common.sh@19 -- # local var val 00:06:50.964 08:41:07 -- setup/common.sh@20 -- # local mem_f mem 00:06:50.964 08:41:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.964 08:41:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:50.964 08:41:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:50.964 08:41:07 -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.964 08:41:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.964 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40014400 kB' 'MemAvailable: 44646568 kB' 'Buffers: 3728 kB' 'Cached: 14025792 kB' 'SwapCached: 0 kB' 'Active: 10964472 kB' 'Inactive: 3665332 kB' 'Active(anon): 9853316 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 603548 kB' 'Mapped: 213556 kB' 'Shmem: 9253032 kB' 'KReclaimable: 503388 kB' 'Slab: 1160020 kB' 'SReclaimable: 503388 kB' 'SUnreclaim: 656632 kB' 'KernelStack: 22208 kB' 'PageTables: 9700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11250656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217148 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB' 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 
-- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.965 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.965 08:41:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 
00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- 
setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:50.966 08:41:07 -- setup/common.sh@33 -- # echo 1024 00:06:50.966 08:41:07 -- setup/common.sh@33 -- # return 0 00:06:50.966 08:41:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:50.966 08:41:07 -- setup/hugepages.sh@112 -- # get_nodes 00:06:50.966 08:41:07 -- setup/hugepages.sh@27 -- # local node 00:06:50.966 08:41:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:50.966 08:41:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:50.966 08:41:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:50.966 08:41:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:50.966 08:41:07 -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:50.966 08:41:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:50.966 08:41:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:50.966 08:41:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:50.966 08:41:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:50.966 08:41:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:50.966 08:41:07 -- setup/common.sh@18 -- # local node=0 00:06:50.966 08:41:07 -- setup/common.sh@19 -- # local var val 00:06:50.966 08:41:07 -- setup/common.sh@20 -- # local mem_f mem 00:06:50.966 08:41:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.966 08:41:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:50.966 08:41:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:50.966 08:41:07 -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.966 08:41:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r 
var val _ 00:06:50.966 08:41:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19712216 kB' 'MemUsed: 12926924 kB' 'SwapCached: 0 kB' 'Active: 6801976 kB' 'Inactive: 3291180 kB' 'Active(anon): 6267844 kB' 'Inactive(anon): 0 kB' 'Active(file): 534132 kB' 'Inactive(file): 3291180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9690428 kB' 'Mapped: 121936 kB' 'AnonPages: 405936 kB' 'Shmem: 5865116 kB' 'KernelStack: 13064 kB' 'PageTables: 5332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 335552 kB' 'Slab: 671548 kB' 'SReclaimable: 335552 kB' 'SUnreclaim: 335996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 
-- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.966 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.966 08:41:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 
00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@33 -- # echo 0 00:06:50.967 08:41:07 -- setup/common.sh@33 -- # return 0 00:06:50.967 08:41:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:50.967 08:41:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:50.967 08:41:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:50.967 08:41:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:50.967 08:41:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:50.967 08:41:07 -- setup/common.sh@18 -- # local node=1 00:06:50.967 08:41:07 -- setup/common.sh@19 -- # local var val 00:06:50.967 08:41:07 -- setup/common.sh@20 -- # local mem_f mem 00:06:50.967 08:41:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:50.967 08:41:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:50.967 08:41:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:50.967 08:41:07 -- setup/common.sh@28 -- # mapfile -t mem 00:06:50.967 08:41:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656076 kB' 'MemFree: 20300740 kB' 'MemUsed: 7355336 kB' 'SwapCached: 0 kB' 'Active: 4163124 kB' 'Inactive: 374152 kB' 'Active(anon): 3586100 kB' 'Inactive(anon): 0 kB' 'Active(file): 577024 kB' 'Inactive(file): 374152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4339116 kB' 'Mapped: 91620 kB' 'AnonPages: 198176 kB' 'Shmem: 3387940 kB' 'KernelStack: 9048 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167836 kB' 'Slab: 488440 kB' 'SReclaimable: 167836 kB' 'SUnreclaim: 320604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:50.967 08:41:07 -- setup/common.sh@32 -- # continue 
00:06:50.967 08:41:07 -- setup/common.sh@31 -- # IFS=': '
00:06:50.967 08:41:07 -- setup/common.sh@31 -- # read -r var val _
[... repeated setup/common.sh@31/@32 xtrace entries elided: each node-meminfo key from Active through HugePages_Free is read and tested against HugePages_Surp, and every non-match falls through to 'continue' ...]
00:06:50.968 08:41:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:50.968 08:41:07 -- setup/common.sh@33 -- # echo 0
00:06:50.968 08:41:07 -- setup/common.sh@33 -- # return 0
00:06:50.968 08:41:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:50.968 08:41:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:50.968 08:41:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:50.968 08:41:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:50.968 08:41:07 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:06:50.968 node0=512 expecting 512
00:06:50.968 08:41:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:50.968 08:41:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:50.968 08:41:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:50.968 08:41:07 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:06:50.968 node1=512 expecting 512
00:06:50.968 08:41:07 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:06:50.968
00:06:50.968 real 0m3.050s
00:06:50.968 user 0m0.968s
00:06:50.968 sys 0m1.983s
00:06:50.968 08:41:07 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:50.968 08:41:07 -- common/autotest_common.sh@10 -- # set +x
00:06:50.968 ************************************
00:06:50.968 END TEST per_node_1G_alloc
00:06:50.968 ************************************
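
The per-key scan that dominates the elided entries above reduces to one bash idiom: set IFS=': ', read each meminfo line into var/val, skip non-matching keys with 'continue', and print the value on the first match. A minimal standalone sketch of that pattern follows (the helper name and the HugePages_Surp example mirror the trace; this is an illustration, not the actual setup/common.sh source):

    get_meminfo() {
        local get=$1 var val _
        # Scan /proc/meminfo line by line; every non-matching key falls
        # through to 'continue', exactly like the @31/@32 entries above.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"   # any trailing 'kB' unit lands in the discarded field
            return 0
        done < /proc/meminfo
        echo 0            # key absent: report 0
    }

    get_meminfo HugePages_Surp   # prints 0 on this build node, as in the trace
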
00:06:50.968 08:41:07 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:06:50.968 08:41:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:50.968 08:41:07 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:50.968 08:41:07 -- common/autotest_common.sh@10 -- # set +x
00:06:50.968 ************************************
00:06:50.968 START TEST even_2G_alloc
00:06:50.968 ************************************
00:06:50.968 08:41:08 -- common/autotest_common.sh@1111 -- # even_2G_alloc
00:06:50.968 08:41:08 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:06:50.968 08:41:08 -- setup/hugepages.sh@49 -- # local size=2097152
00:06:50.968 08:41:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:50.968 08:41:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:50.968 08:41:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:50.968 08:41:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:50.968 08:41:08 -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:50.968 08:41:08 -- setup/hugepages.sh@62 -- # local user_nodes
00:06:50.968 08:41:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:06:50.968 08:41:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:06:50.968 08:41:08 -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:50.968 08:41:08 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:50.968 08:41:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:50.968 08:41:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:50.968 08:41:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:50.968 08:41:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:06:50.968 08:41:08 -- setup/hugepages.sh@83 -- # : 512
00:06:50.968 08:41:08 -- setup/hugepages.sh@84 -- # : 1
00:06:50.968 08:41:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:50.968 08:41:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:06:50.968 08:41:08 -- setup/hugepages.sh@83 -- # : 0
00:06:50.968 08:41:08 -- setup/hugepages.sh@84 -- # : 0
00:06:50.968 08:41:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:50.968 08:41:08 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:06:50.968 08:41:08 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
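
get_test_nr_hugepages and get_test_nr_hugepages_per_node above come down to two pieces of arithmetic: 2097152 kB of requested hugepage memory at the default 2048 kB page size is 1024 pages, and with no user-supplied node list that budget is split evenly, 512 pages to each of the two NUMA nodes (the nodes_test[_no_nodes - 1]=512 assignments). A condensed sketch of that logic, assuming the 2048 kB Hugepagesize reported in the meminfo dumps below (variable names follow the trace; the loop itself is illustrative):

    size=2097152               # requested kB, as in get_test_nr_hugepages 2097152
    default_hugepages=2048     # kB per hugepage (Hugepagesize in the dumps below)
    nr_hugepages=$(( size / default_hugepages ))          # 1024 pages

    _no_nodes=2
    declare -a nodes_test
    # Walk the nodes from the highest index down, as the @81/@82 entries do,
    # giving each node an even share of the page budget.
    for (( node = _no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / _no_nodes ))  # 512 per node
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"  # node0=512 node1=512
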
00:06:50.968 08:41:08 -- setup/hugepages.sh@153 -- # setup output
00:06:50.968 08:41:08 -- setup/common.sh@9 -- # [[ output == output ]]
00:06:50.968 08:41:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:06:54.252 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:06:54.252 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:06:54.252 08:41:11 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:06:54.252 08:41:11 -- setup/hugepages.sh@89 -- # local node
00:06:54.252 08:41:11 -- setup/hugepages.sh@90 -- # local sorted_t
00:06:54.252 08:41:11 -- setup/hugepages.sh@91 -- # local sorted_s
00:06:54.252 08:41:11 -- setup/hugepages.sh@92 -- # local surp
00:06:54.252 08:41:11 -- setup/hugepages.sh@93 -- # local resv
00:06:54.252 08:41:11 -- setup/hugepages.sh@94 -- # local anon
00:06:54.252 08:41:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:06:54.252 08:41:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:06:54.252 08:41:11 -- setup/common.sh@17 -- # local get=AnonHugePages
00:06:54.252 08:41:11 -- setup/common.sh@18 -- # local node=
00:06:54.252 08:41:11 -- setup/common.sh@19 -- # local var val
00:06:54.252 08:41:11 -- setup/common.sh@20 -- # local mem_f mem
00:06:54.252 08:41:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:54.252 08:41:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:54.252 08:41:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:54.252 08:41:11 -- setup/common.sh@28 -- # mapfile -t mem
00:06:54.252 08:41:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:54.252 08:41:11 -- setup/common.sh@31 -- # IFS=': '
00:06:54.252 08:41:11 -- setup/common.sh@31 -- # read -r var val _
00:06:54.252 08:41:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39975488 kB' 'MemAvailable: 44607656 kB' 'Buffers: 3728 kB' 'Cached: 14025896 kB' 'SwapCached: 0 kB' 'Active: 10971348 kB' 'Inactive: 3665332 kB' 'Active(anon): 9860192 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610492 kB' 'Mapped: 214412 kB' 'Shmem: 9253136 kB' 'KReclaimable: 503388 kB' 'Slab: 1160400 kB' 'SReclaimable: 503388 kB' 'SUnreclaim: 657012 kB' 'KernelStack: 22048 kB' 'PageTables: 9352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11254396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217088 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
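
One detail in the get_meminfo prologue above deserves a note: when a node argument is supplied, mem_f points at /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> " prefix, and the mem=("${mem[@]#Node +([0-9]) }") expansion strips that prefix from every element with an extglob pattern (here node= is empty, so plain /proc/meminfo is read and the strip is a no-op). A small sketch of the stripping step, assuming a Linux host where node0 exists (illustrative, not the setup/common.sh source):

    shopt -s extglob                 # +([0-9]) below is an extglob pattern
    node=0
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    # 'Node 0 MemFree: 123456 kB' -> 'MemFree: 123456 kB' for every element
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"    # first three cleaned lines
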
00:06:54.252 08:41:11 -- [... repeated setup/common.sh@31/@32 xtrace entries elided: each key from MemTotal through HardwareCorrupted is read and tested against AnonHugePages, and every non-match falls through to 'continue' ...]
00:06:54.253 08:41:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:54.253 08:41:11 -- setup/common.sh@33 -- # echo 0
00:06:54.253 08:41:11 -- setup/common.sh@33 -- # return 0
00:06:54.253 08:41:11 -- setup/hugepages.sh@97 -- # anon=0
00:06:54.253 08:41:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:06:54.253 08:41:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:54.253 08:41:11 -- setup/common.sh@18 -- # local node=
00:06:54.253 08:41:11 -- setup/common.sh@19 -- # local var val
00:06:54.253 08:41:11 -- setup/common.sh@20 -- # local mem_f mem
00:06:54.253 08:41:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:54.253 08:41:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:54.253 08:41:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:54.253 08:41:11 -- setup/common.sh@28 -- # mapfile -t mem
00:06:54.253 08:41:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:54.253 08:41:11 -- setup/common.sh@31 -- # IFS=': '
00:06:54.253 08:41:11 -- setup/common.sh@31 -- # read -r var val _
00:06:54.253 08:41:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39977436 kB' 'MemAvailable: 44609604 kB' 'Buffers: 3728 kB' 'Cached: 14025896 kB' 'SwapCached: 0 kB' 'Active: 10965992 kB' 'Inactive: 3665332 kB' 'Active(anon): 9854836 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605212 kB' 'Mapped: 213588 kB' 'Shmem: 9253136 kB' 'KReclaimable: 503388 kB' 'Slab: 1160400 kB' 'SReclaimable: 503388 kB' 'SUnreclaim: 657012 kB' 'KernelStack: 22112 kB' 'PageTables: 9596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11250680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217068 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
00:06:54.253 08:41:11 -- [... repeated setup/common.sh@31/@32 xtrace entries elided: each key from MemTotal through HugePages_Rsvd is read and tested against HugePages_Surp, and every non-match falls through to 'continue' ...]
00:06:54.517 08:41:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:54.517 08:41:11 -- setup/common.sh@33 -- # echo 0
00:06:54.517 08:41:11 -- setup/common.sh@33 -- # return 0
00:06:54.517 08:41:11 -- setup/hugepages.sh@99 -- # surp=0
00:06:54.517 08:41:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:54.517 08:41:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:54.517 08:41:11 -- setup/common.sh@18 -- # local node=
00:06:54.517 08:41:11 -- setup/common.sh@19 -- # local var val
00:06:54.517 08:41:11 -- setup/common.sh@20 -- # local mem_f mem
00:06:54.517 08:41:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:54.517 08:41:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:54.517 08:41:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:54.517 08:41:11 -- setup/common.sh@28 -- # mapfile -t mem
00:06:54.517 08:41:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:54.517 08:41:11 -- setup/common.sh@31 -- # IFS=': '
00:06:54.517 08:41:11 -- setup/common.sh@31 -- # read -r var val _
00:06:54.517 08:41:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39978316 kB' 'MemAvailable: 44610484 kB' 'Buffers: 3728 kB' 'Cached: 14025908 kB' 'SwapCached: 0 kB' 'Active: 10965280 kB' 'Inactive: 3665332 kB' 'Active(anon): 9854124 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604480 kB' 'Mapped: 213588 kB' 'Shmem: 9253148 kB' 'KReclaimable: 503388 kB' 'Slab: 1160400 kB' 'SReclaimable: 503388 kB' 'SUnreclaim: 657012 kB' 'KernelStack: 22032 kB' 'PageTables: 9304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11248300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217020 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
00:06:54.517 08:41:11 -- [... repeated setup/common.sh@31/@32 xtrace entries elided: each key from MemTotal through HugePages_Free is read and tested against HugePages_Rsvd, and every non-match falls through to 'continue' ...]
00:06:54.518 08:41:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:54.518 08:41:11 -- setup/common.sh@33 -- # echo 0
00:06:54.518 08:41:11 -- setup/common.sh@33 -- # return 0
00:06:54.518 08:41:11 -- setup/hugepages.sh@100 -- # resv=0
00:06:54.518 08:41:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:54.518 nr_hugepages=1024
00:06:54.518 08:41:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:54.518 resv_hugepages=0
00:06:54.518 08:41:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:54.518 surplus_hugepages=0
00:06:54.518 08:41:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:54.518 anon_hugepages=0
00:06:54.518 08:41:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:54.518 08:41:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:06:54.518 08:41:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:54.518 08:41:11 -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:54.518 08:41:11 -- setup/common.sh@18 -- # local node=
00:06:54.518 08:41:11 -- setup/common.sh@19 -- # local var val
00:06:54.518 08:41:11 -- setup/common.sh@20 -- # local mem_f mem
00:06:54.518 08:41:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:54.518 08:41:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:54.518 08:41:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:54.518 08:41:11 -- setup/common.sh@28 -- # mapfile -t mem
00:06:54.518 08:41:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:54.518 08:41:11 -- setup/common.sh@31 -- # IFS=': '
00:06:54.518 08:41:11 -- setup/common.sh@31 -- # read -r var val _
00:06:54.518 08:41:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39978064 kB' 'MemAvailable: 44610232 kB' 'Buffers: 3728 kB' 'Cached: 14025924 kB' 'SwapCached: 0 kB' 'Active: 10965380 kB' 'Inactive: 3665332 kB' 'Active(anon): 9854224 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 604524 kB' 'Mapped: 213588 kB' 'Shmem: 9253164 kB' 'KReclaimable: 503388 kB' 'Slab: 1160400 kB' 'SReclaimable: 503388 kB' 'SUnreclaim: 657012 kB' 'KernelStack: 22064 kB' 'PageTables: 9412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11248316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217036 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: setup/common.sh@31-32 repeats the same IFS=': ' / read -r var val _ / continue triple for every remaining meminfo field, Active(file) through Unaccepted, none of which matches HugePages_Total; the identical iterations are elided]
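The loop condensed above is setup/common.sh's generic meminfo scanner. As a minimal standalone sketch of what it computes (the name get_meminfo_sketch and the simplifications are mine, not SPDK's; the real get_meminfo also carries extra locals and xtrace plumbing):

#!/usr/bin/env bash
shopt -s extglob    # for the +([0-9]) pattern used to strip "Node N " prefixes

# Print one meminfo field, system-wide or for a single NUMA node.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node lines carry a "Node N " prefix
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                 # e.g. 1024 for HugePages_Total below
            return 0
        fi
    done
    return 1
}

get_meminfo_sketch HugePages_Total     # whole system
get_meminfo_sketch HugePages_Surp 0    # NUMA node 0 only

The trace resumes with the final iteration of the real scanner, which hits HugePages_Total and echoes 1024.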
setup/common.sh@31 -- # IFS=': ' 00:06:54.520 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.520 08:41:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:54.520 08:41:11 -- setup/common.sh@33 -- # echo 1024 00:06:54.520 08:41:11 -- setup/common.sh@33 -- # return 0 00:06:54.520 08:41:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:54.520 08:41:11 -- setup/hugepages.sh@112 -- # get_nodes 00:06:54.520 08:41:11 -- setup/hugepages.sh@27 -- # local node 00:06:54.520 08:41:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:54.520 08:41:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:54.520 08:41:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:54.520 08:41:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:54.520 08:41:11 -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:54.520 08:41:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:54.520 08:41:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:54.520 08:41:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:54.520 08:41:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:54.520 08:41:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:54.520 08:41:11 -- setup/common.sh@18 -- # local node=0 00:06:54.520 08:41:11 -- setup/common.sh@19 -- # local var val 00:06:54.520 08:41:11 -- setup/common.sh@20 -- # local mem_f mem 00:06:54.520 08:41:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:54.520 08:41:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:54.520 08:41:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:54.520 08:41:11 -- setup/common.sh@28 -- # mapfile -t mem 00:06:54.520 08:41:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:54.520 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.520 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.520 08:41:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19699868 kB' 'MemUsed: 12939272 kB' 'SwapCached: 0 kB' 'Active: 6803940 kB' 'Inactive: 3291180 kB' 'Active(anon): 6269808 kB' 'Inactive(anon): 0 kB' 'Active(file): 534132 kB' 'Inactive(file): 3291180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9690520 kB' 'Mapped: 121940 kB' 'AnonPages: 407960 kB' 'Shmem: 5865208 kB' 'KernelStack: 13080 kB' 'PageTables: 5336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 335552 kB' 'Slab: 671972 kB' 'SReclaimable: 335552 kB' 'SUnreclaim: 336420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:54.520 08:41:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.520 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.520 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.520 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.520 08:41:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.520 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.520 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.520 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.520 08:41:11 -- 
[xtrace condensed: the same per-key scan runs over node0's meminfo fields, MemUsed through SUnreclaim, each rejected against HugePages_Surp with continue; identical iterations elided]
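The get_nodes step traced just before this scan (hugepages.sh@27-32) discovered the NUMA topology the same way the kernel exposes it. A sketch of that discovery, assuming 2 MiB hugepages; the sysfs path is standard kernel ABI, but the snippet itself is mine:

#!/usr/bin/env bash
shopt -s extglob

# Enumerate NUMA nodes and record each node's current 2 MiB hugepage count,
# mirroring the nodes_sys[...] assignments in the trace.
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
declare -p nodes_sys    # on this rig: nodes_sys=([0]="512" [1]="512")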
setup/common.sh@32 -- # continue 00:06:54.520 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.520 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.520 08:41:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.520 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.520 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.520 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.520 08:41:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@33 -- # echo 0 00:06:54.521 08:41:11 -- setup/common.sh@33 -- # return 0 00:06:54.521 08:41:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:54.521 08:41:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:54.521 08:41:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:54.521 08:41:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:54.521 08:41:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:54.521 08:41:11 -- setup/common.sh@18 -- # local node=1 00:06:54.521 08:41:11 -- setup/common.sh@19 -- # local var val 00:06:54.521 08:41:11 -- setup/common.sh@20 -- # local mem_f mem 00:06:54.521 08:41:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:54.521 08:41:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:54.521 08:41:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:54.521 08:41:11 -- setup/common.sh@28 -- # 
mapfile -t mem 00:06:54.521 08:41:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656076 kB' 'MemFree: 20277692 kB' 'MemUsed: 7378384 kB' 'SwapCached: 0 kB' 'Active: 4161660 kB' 'Inactive: 374152 kB' 'Active(anon): 3584636 kB' 'Inactive(anon): 0 kB' 'Active(file): 577024 kB' 'Inactive(file): 374152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4339132 kB' 'Mapped: 91648 kB' 'AnonPages: 196808 kB' 'Shmem: 3387956 kB' 'KernelStack: 9000 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167836 kB' 'Slab: 488428 kB' 'SReclaimable: 167836 kB' 'SUnreclaim: 320592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.521 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.521 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 
[xtrace condensed: node1's meminfo fields, Inactive(file) through Unaccepted, are scanned and rejected against HugePages_Surp exactly as for node0; identical iterations elided]
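The trace resumes below with the real per-node comparison. As a sketch of the set-style check it performs, indexing an array by the count itself so that a single surviving index means every node agrees (sample values copied from the trace; the exact final comparison in hugepages.sh may differ in detail):

#!/usr/bin/env bash
# Collapse per-node counts into "sets" by using the count as the array index.
declare -a sorted_t sorted_s
declare -a nodes_test=(512 512) nodes_sys=(512 512)
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
done
# One distinct index on each side means tested and expected counts agree.
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "hugepage counts verified"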
00:06:54.522 08:41:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.522 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.522 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.522 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.522 08:41:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.522 08:41:11 -- setup/common.sh@32 -- # continue 00:06:54.522 08:41:11 -- setup/common.sh@31 -- # IFS=': ' 00:06:54.522 08:41:11 -- setup/common.sh@31 -- # read -r var val _ 00:06:54.522 08:41:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:54.522 08:41:11 -- setup/common.sh@33 -- # echo 0 00:06:54.522 08:41:11 -- setup/common.sh@33 -- # return 0 00:06:54.522 08:41:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:54.522 08:41:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:54.522 08:41:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:54.522 08:41:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:54.522 08:41:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:54.522 node0=512 expecting 512 00:06:54.522 08:41:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:54.522 08:41:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:54.522 08:41:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:54.522 08:41:11 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:06:54.522 node1=512 expecting 512 00:06:54.522 08:41:11 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:54.522 00:06:54.522 real 0m3.525s 00:06:54.522 user 0m1.287s 00:06:54.522 sys 0m2.277s 00:06:54.522 08:41:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:54.522 08:41:11 -- common/autotest_common.sh@10 -- # set +x 00:06:54.522 ************************************ 00:06:54.522 END TEST even_2G_alloc 00:06:54.522 ************************************ 00:06:54.522 08:41:11 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:06:54.522 08:41:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:54.522 08:41:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.522 08:41:11 -- common/autotest_common.sh@10 -- # set +x 00:06:54.781 ************************************ 00:06:54.781 START TEST odd_alloc 00:06:54.781 ************************************ 00:06:54.781 08:41:11 -- common/autotest_common.sh@1111 -- # odd_alloc 00:06:54.781 08:41:11 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:06:54.781 08:41:11 -- setup/hugepages.sh@49 -- # local size=2098176 00:06:54.781 08:41:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:54.781 08:41:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:54.781 08:41:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:06:54.781 08:41:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:54.781 08:41:11 -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:54.781 08:41:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:06:54.781 08:41:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:06:54.781 08:41:11 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:54.781 08:41:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:54.781 08:41:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:54.781 08:41:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:54.781 08:41:11 -- setup/hugepages.sh@74 -- # (( 0 > 
0 )) 00:06:54.781 08:41:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:54.781 08:41:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:54.781 08:41:11 -- setup/hugepages.sh@83 -- # : 513 00:06:54.781 08:41:11 -- setup/hugepages.sh@84 -- # : 1 00:06:54.781 08:41:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:54.781 08:41:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:06:54.781 08:41:11 -- setup/hugepages.sh@83 -- # : 0 00:06:54.781 08:41:11 -- setup/hugepages.sh@84 -- # : 0 00:06:54.781 08:41:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:54.781 08:41:11 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:06:54.781 08:41:11 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:06:54.781 08:41:11 -- setup/hugepages.sh@160 -- # setup output 00:06:54.781 08:41:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:06:54.781 08:41:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:58.103 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:58.103 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:58.103 08:41:14 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:06:58.103 08:41:14 -- setup/hugepages.sh@89 -- # local node 00:06:58.103 08:41:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:06:58.103 08:41:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:06:58.103 08:41:14 -- setup/hugepages.sh@92 -- # local surp 00:06:58.103 08:41:14 -- setup/hugepages.sh@93 -- # local resv 00:06:58.103 08:41:14 -- setup/hugepages.sh@94 -- # local anon 00:06:58.103 08:41:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:58.103 08:41:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:58.103 08:41:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:58.103 08:41:14 -- setup/common.sh@18 -- # local node= 00:06:58.103 08:41:14 -- setup/common.sh@19 -- # local var val 00:06:58.103 08:41:14 -- setup/common.sh@20 -- # local mem_f mem 00:06:58.103 08:41:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:58.103 08:41:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:58.103 08:41:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:58.103 08:41:14 -- setup/common.sh@28 -- # mapfile -t mem 00:06:58.103 08:41:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:58.103 08:41:14 -- 
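For odd_alloc, the get_test_nr_hugepages_per_node trace above hands node1 512 pages and node0 the 513-page remainder, and HUGEMEM=2049 is simply 1025 pages at 2 MiB each. A sketch that reproduces that split for 1025 pages over 2 nodes (the real function also honors user-specified per-node counts, which this omits):

#!/usr/bin/env bash
# Divide a hugepage budget across nodes, later nodes first; the lowest node
# absorbs the rounding remainder, giving the 513/512 split seen above.
split_pages() {
    local remaining=$1 no_nodes=$2
    local -a nodes_test
    while (( no_nodes > 0 )); do
        nodes_test[no_nodes - 1]=$(( remaining / no_nodes ))
        (( remaining -= nodes_test[no_nodes - 1] ))
        (( no_nodes-- ))
    done
    declare -p nodes_test
}
split_pages 1025 2    # -> declare -a nodes_test=([0]="513" [1]="512")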
setup/common.sh@31 -- # IFS=': ' 00:06:58.103 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.103 08:41:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40004480 kB' 'MemAvailable: 44636616 kB' 'Buffers: 3728 kB' 'Cached: 14026024 kB' 'SwapCached: 0 kB' 'Active: 10971568 kB' 'Inactive: 3665332 kB' 'Active(anon): 9860412 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 610932 kB' 'Mapped: 214136 kB' 'Shmem: 9253264 kB' 'KReclaimable: 503356 kB' 'Slab: 1159780 kB' 'SReclaimable: 503356 kB' 'SUnreclaim: 656424 kB' 'KernelStack: 22064 kB' 'PageTables: 9404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11255348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217136 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB' 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.103 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.103 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.103 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.103 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.103 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.103 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.103 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.103 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.103 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.103 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.103 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.103 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.103 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.103 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.103 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.103 08:41:14 -- 
[xtrace condensed: the per-key scan rejects Active(anon) through WritebackTmp against AnonHugePages with the usual continue/IFS/read triple; identical iterations elided]
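The scan below is about to land on AnonHugePages; the reason it was sampled at all is the transparent-hugepage guard traced at hugepages.sh@96 earlier ("always [madvise] never" does not select [never]). A sketch of that guard, with awk as my stand-in for the traced get_meminfo call:

#!/usr/bin/env bash
# Only when THP is not fully disabled can anonymous huge pages inflate the
# hugepage accounting, so sample AnonHugePages in that case; otherwise anon
# stays at its 0 default.
anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=${anon} kB"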
-- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:58.104 08:41:14 -- setup/common.sh@33 -- # echo 0 00:06:58.104 08:41:14 -- setup/common.sh@33 -- # return 0 00:06:58.104 08:41:14 -- setup/hugepages.sh@97 -- # anon=0 00:06:58.104 08:41:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:58.104 08:41:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:58.104 08:41:14 -- setup/common.sh@18 -- # local node= 00:06:58.104 08:41:14 -- setup/common.sh@19 -- # local var val 00:06:58.104 08:41:14 -- setup/common.sh@20 -- # local mem_f mem 00:06:58.104 08:41:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:58.104 08:41:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:58.104 08:41:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:58.104 08:41:14 -- setup/common.sh@28 -- # mapfile -t mem 00:06:58.104 08:41:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.104 08:41:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40003724 kB' 'MemAvailable: 44635860 kB' 'Buffers: 3728 kB' 'Cached: 14026024 kB' 'SwapCached: 0 kB' 'Active: 10971676 kB' 'Inactive: 3665332 kB' 'Active(anon): 9860520 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606020 kB' 
'Mapped: 213964 kB' 'Shmem: 9253264 kB' 'KReclaimable: 503356 kB' 'Slab: 1159780 kB' 'SReclaimable: 503356 kB' 'SUnreclaim: 656424 kB' 'KernelStack: 22064 kB' 'PageTables: 9420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11254424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217100 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB' 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.104 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.104 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 
00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- 
setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 
08:41:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.105 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.105 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 
08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.106 08:41:14 -- setup/common.sh@33 -- # echo 0 00:06:58.106 08:41:14 -- setup/common.sh@33 -- # return 0 00:06:58.106 08:41:14 -- setup/hugepages.sh@99 -- # surp=0 00:06:58.106 08:41:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:58.106 08:41:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:58.106 08:41:14 -- setup/common.sh@18 -- # local node= 00:06:58.106 08:41:14 -- setup/common.sh@19 -- # local var val 00:06:58.106 08:41:14 -- setup/common.sh@20 -- # local mem_f mem 00:06:58.106 08:41:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:58.106 08:41:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:58.106 08:41:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:58.106 08:41:14 -- setup/common.sh@28 -- # mapfile -t mem 00:06:58.106 08:41:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40008396 kB' 'MemAvailable: 44640532 kB' 'Buffers: 3728 kB' 'Cached: 14026028 kB' 'SwapCached: 0 kB' 'Active: 10966928 kB' 'Inactive: 3665332 kB' 'Active(anon): 9855772 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605868 kB' 'Mapped: 213964 kB' 'Shmem: 9253268 kB' 'KReclaimable: 503356 kB' 'Slab: 1159804 kB' 'SReclaimable: 503356 kB' 'SUnreclaim: 656448 kB' 'KernelStack: 22064 kB' 'PageTables: 9436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11249256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217084 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB' 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- 
setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.106 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.106 08:41:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- 
setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:58.107 08:41:14 -- setup/common.sh@33 -- # echo 0 00:06:58.107 08:41:14 -- setup/common.sh@33 -- # return 0 00:06:58.107 08:41:14 -- setup/hugepages.sh@100 -- # resv=0 
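At this point the script has established anon=0 (AnonHugePages), surp=0 (HugePages_Surp) and now resv=0 (HugePages_Rsvd); the entries that follow read back HugePages_Total and assert the accounting identity traced at hugepages.sh@107/@110: the total reported by meminfo (1025, already expanded in the xtrace output) must equal nr_hugepages plus surplus plus reserved pages. The check reduces to plain shell arithmetic; a sketch using the values from this run:

    nr_hugepages=1025 surp=0 resv=0
    # hugepages.sh@107: meminfo total must match what the test configured
    (( 1025 == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"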
00:06:58.107 08:41:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:06:58.107 nr_hugepages=1025 00:06:58.107 08:41:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:58.107 resv_hugepages=0 00:06:58.107 08:41:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:58.107 surplus_hugepages=0 00:06:58.107 08:41:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:58.107 anon_hugepages=0 00:06:58.107 08:41:14 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:58.107 08:41:14 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:06:58.107 08:41:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:58.107 08:41:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:58.107 08:41:14 -- setup/common.sh@18 -- # local node= 00:06:58.107 08:41:14 -- setup/common.sh@19 -- # local var val 00:06:58.107 08:41:14 -- setup/common.sh@20 -- # local mem_f mem 00:06:58.107 08:41:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:58.107 08:41:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:58.107 08:41:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:58.107 08:41:14 -- setup/common.sh@28 -- # mapfile -t mem 00:06:58.107 08:41:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40009360 kB' 'MemAvailable: 44641496 kB' 'Buffers: 3728 kB' 'Cached: 14026052 kB' 'SwapCached: 0 kB' 'Active: 10966156 kB' 'Inactive: 3665332 kB' 'Active(anon): 9855000 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605096 kB' 'Mapped: 213616 kB' 'Shmem: 9253292 kB' 'KReclaimable: 503356 kB' 'Slab: 1159804 kB' 'SReclaimable: 503356 kB' 'SUnreclaim: 656448 kB' 'KernelStack: 22064 kB' 'PageTables: 9408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 11249268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217084 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB' 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # 
IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.107 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.107 08:41:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.108 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.108 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
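Once the HugePages_Total scan in progress here completes (hugepages.sh@110 below), the script repeats the lookup per NUMA node (node=0 and node=1): get_meminfo switches mem_f from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo, whose entries carry a 'Node <n> ' prefix that the script strips before parsing. The expected split on this box is 512 pages on node0 and 513 on node1. A rough awk-based equivalent — an illustration only, not the traced script's exact prefix-stripping mechanism:

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # per-node lines look like 'Node 0 HugePages_Total: 512', so the key is field 3
        total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
        echo "node${node}: HugePages_Total=${total}"
    done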
00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:58.109 08:41:14 -- setup/common.sh@33 -- # echo 1025 00:06:58.109 08:41:14 -- setup/common.sh@33 -- # return 0 00:06:58.109 08:41:14 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:58.109 08:41:14 -- setup/hugepages.sh@112 -- # get_nodes 00:06:58.109 08:41:14 -- setup/hugepages.sh@27 -- # local node 00:06:58.109 08:41:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:58.109 08:41:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:58.109 08:41:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:58.109 08:41:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:06:58.109 08:41:14 -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:58.109 08:41:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:58.109 08:41:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:58.109 08:41:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:58.109 08:41:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:58.109 08:41:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:58.109 08:41:14 -- setup/common.sh@18 -- # local node=0 00:06:58.109 08:41:14 -- setup/common.sh@19 -- # 
local var val 00:06:58.109 08:41:14 -- setup/common.sh@20 -- # local mem_f mem 00:06:58.109 08:41:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:58.109 08:41:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:58.109 08:41:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:58.109 08:41:14 -- setup/common.sh@28 -- # mapfile -t mem 00:06:58.109 08:41:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19711184 kB' 'MemUsed: 12927956 kB' 'SwapCached: 0 kB' 'Active: 6802484 kB' 'Inactive: 3291180 kB' 'Active(anon): 6268352 kB' 'Inactive(anon): 0 kB' 'Active(file): 534132 kB' 'Inactive(file): 3291180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9690624 kB' 'Mapped: 121940 kB' 'AnonPages: 406264 kB' 'Shmem: 5865312 kB' 'KernelStack: 13048 kB' 'PageTables: 5232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 335520 kB' 'Slab: 671400 kB' 'SReclaimable: 335520 kB' 'SUnreclaim: 335880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 
08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:58.109 08:41:14 -- setup/common.sh@32 -- # continue 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # IFS=': ' 00:06:58.109 08:41:14 -- setup/common.sh@31 -- # read -r var val _ 00:06:58.109 08:41:14 -- 
00:06:58.110 08:41:14 -- setup/common.sh@32 -- # [per-field scan of the node0 meminfo snapshot continues: SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free are skipped until HugePages_Surp matches]
00:06:58.110 08:41:14 -- setup/common.sh@33 -- # echo 0
00:06:58.110 08:41:14 -- setup/common.sh@33 -- # return 0
00:06:58.110 08:41:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:58.110 08:41:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:58.110 08:41:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:58.110 08:41:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:06:58.110 08:41:14 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:58.110 08:41:14 -- setup/common.sh@18 -- # local node=1
00:06:58.110 08:41:14 -- setup/common.sh@19 -- # local var val
00:06:58.110 08:41:14 -- setup/common.sh@20 -- # local mem_f mem
00:06:58.110 08:41:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:58.110 08:41:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:06:58.110 08:41:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:06:58.110 08:41:14 -- setup/common.sh@28 -- # mapfile -t mem
00:06:58.110 08:41:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:58.110 08:41:14 -- setup/common.sh@31 -- # IFS=': '
00:06:58.110 08:41:14 -- setup/common.sh@31 -- # read -r var val _
00:06:58.110 08:41:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656076 kB' 'MemFree: 20298176 kB' 'MemUsed: 7357900 kB' 'SwapCached: 0 kB' 'Active: 4163684 kB' 'Inactive: 374152 kB' 'Active(anon): 3586660 kB' 'Inactive(anon): 0 kB' 'Active(file): 577024 kB' 'Inactive(file): 374152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4339172 kB' 'Mapped: 91676 kB' 'AnonPages: 198836 kB' 'Shmem: 3387996 kB' 'KernelStack: 9016 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167836 kB' 'Slab: 488404 kB' 'SReclaimable: 167836 kB' 'SUnreclaim: 320568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:06:58.110 08:41:14 -- setup/common.sh@32 -- # [per-field scan of the node1 meminfo snapshot continues: every field from MemTotal through HugePages_Free is skipped until HugePages_Surp matches]
00:06:58.110 08:41:14 -- setup/common.sh@33 -- # echo 0
00:06:58.110 08:41:14 -- setup/common.sh@33 -- # return 0
00:06:58.110 08:41:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:58.110 08:41:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:58.110 08:41:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:58.110 08:41:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:58.111 08:41:14 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:06:58.111 node0=512 expecting 513
00:06:58.111 08:41:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:06:58.111 08:41:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:06:58.111 08:41:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:06:58.111 08:41:14 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:06:58.111 node1=513 expecting 512
00:06:58.111 08:41:14 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:06:58.111
00:06:58.111 real 0m3.206s
00:06:58.111 user 0m1.137s
00:06:58.111 sys 0m2.005s
00:06:58.111 08:41:14 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:58.111 08:41:14 -- common/autotest_common.sh@10 -- # set +x
00:06:58.111 ************************************
00:06:58.111 END TEST odd_alloc
00:06:58.111 ************************************
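The get_meminfo scans traced above all follow one pattern from the test's setup/common.sh: pick /proc/meminfo, or the per-node sysfs copy when a node id is passed, strip the "Node N " prefix those sysfs lines carry, then read "field: value" pairs until the requested field matches and echo its value. A minimal re-implementation of that lookup, assuming bash with extglob (a simplified reconstruction of what the trace shows, not the exact SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        local -a mem
        # per-node counters live in sysfs; fall back to the global file
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines are prefixed "Node N "
        local entry
        for entry in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$entry"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Surp 1   # prints 0 on this box, per the snapshot above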
00:06:58.111 08:41:15 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:06:58.111 08:41:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:58.111 08:41:15 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:58.111 08:41:15 -- common/autotest_common.sh@10 -- # set +x
00:06:58.111 ************************************
00:06:58.111 START TEST custom_alloc
00:06:58.111 ************************************
00:06:58.111 08:41:15 -- common/autotest_common.sh@1111 -- # custom_alloc
00:06:58.111 08:41:15 -- setup/hugepages.sh@167 -- # local IFS=,
00:06:58.111 08:41:15 -- setup/hugepages.sh@169 -- # local node
00:06:58.111 08:41:15 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:06:58.111 08:41:15 -- setup/hugepages.sh@170 -- # local nodes_hp
00:06:58.111 08:41:15 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:06:58.111 08:41:15 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:06:58.111 08:41:15 -- setup/hugepages.sh@49 -- # local size=1048576
00:06:58.111 08:41:15 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:58.111 08:41:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:58.111 08:41:15 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:06:58.111 08:41:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:58.111 08:41:15 -- setup/hugepages.sh@62 -- # user_nodes=()
00:06:58.111 08:41:15 -- setup/hugepages.sh@62 -- # local user_nodes
00:06:58.111 08:41:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:06:58.111 08:41:15 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:06:58.111 08:41:15 -- setup/hugepages.sh@67 -- # nodes_test=()
00:06:58.111 08:41:15 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:06:58.111 08:41:15 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:06:58.111 08:41:15 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:06:58.111 08:41:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:58.111 08:41:15 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:06:58.111 08:41:15 -- setup/hugepages.sh@83 -- # : 256
00:06:58.111 08:41:15 -- setup/hugepages.sh@84 -- # : 1
00:06:58.111 08:41:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:58.111 08:41:15 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:06:58.111 08:41:15 -- setup/hugepages.sh@83 -- # : 0
00:06:58.111 08:41:15 -- setup/hugepages.sh@84 -- # : 0
00:06:58.111 08:41:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:06:58.111 08:41:15 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:06:58.111 08:41:15 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:06:58.111 08:41:15 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:06:58.111 08:41:15 -- setup/hugepages.sh@49 -- # local size=2097152
00:06:58.111 08:41:15 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:06:58.111 08:41:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:06:58.111 08:41:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:06:58.111 08:41:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:06:58.111 08:41:15 -- [locals reset as above with _nr_hugepages=1024; nodes_hp already has an entry for node 0, so it takes precedence]
00:06:58.112 08:41:15 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:06:58.112 08:41:15 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:06:58.112 08:41:15 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:06:58.112 08:41:15 -- setup/hugepages.sh@78 -- # return 0
00:06:58.112 08:41:15 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:06:58.112 08:41:15 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:06:58.112 08:41:15 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:06:58.112 08:41:15 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:06:58.112 08:41:15 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:06:58.112 08:41:15 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:06:58.112 08:41:15 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:06:58.112 08:41:15 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:06:58.112 08:41:15 -- [locals reset; with two nodes_hp entries both are copied: nodes_test[0]=512, nodes_test[1]=1024]
00:06:58.112 08:41:15 -- setup/hugepages.sh@78 -- # return 0
00:06:58.112 08:41:15 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:06:58.112 08:41:15 -- setup/hugepages.sh@187 -- # setup output
00:06:58.112 08:41:15 -- setup/common.sh@9 -- # [[ output == output ]]
00:06:58.112 08:41:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
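Two calls to get_test_nr_hugepages drive the custom_alloc setup traced above: 1048576 kB becomes 512 hugepages and 2097152 kB becomes 1024 (the snapshots below show Hugepagesize: 2048 kB on this box), and the per-node counts are then joined into the HUGENODE string handed to scripts/setup.sh. A condensed sketch of that arithmetic, with variable names following the trace rather than the full script:

    default_hugepages=2048                            # kB, Hugepagesize from /proc/meminfo
    declare -a nodes_hp
    nodes_hp[0]=$(( 1048576 / default_hugepages ))    # 512 pages pinned to node 0
    nodes_hp[1]=$(( 2097152 / default_hugepages ))    # 1024 pages pinned to node 1
    HUGENODE=
    for node in "${!nodes_hp[@]}"; do
        # join entries with commas, matching the traced IFS=, expansion
        HUGENODE+="${HUGENODE:+,}nodes_hp[$node]=${nodes_hp[node]}"
    done
    echo "$HUGENODE"   # nodes_hp[0]=512,nodes_hp[1]=1024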
00:07:00.640 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:07:00.640 [0000:00:04.6 through 0000:00:04.0 and 0000:80:04.7 through 0000:80:04.0 (8086 2021): Already using the vfio-pci driver]
00:07:00.640 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:00.903 08:41:17 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:07:00.903 08:41:17 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:07:00.903 08:41:17 -- setup/hugepages.sh@89 -- # local node
00:07:00.903 08:41:17 -- setup/hugepages.sh@90 -- # local sorted_t
00:07:00.903 08:41:17 -- setup/hugepages.sh@91 -- # local sorted_s
00:07:00.903 08:41:17 -- setup/hugepages.sh@92 -- # local surp
00:07:00.903 08:41:17 -- setup/hugepages.sh@93 -- # local resv
00:07:00.903 08:41:17 -- setup/hugepages.sh@94 -- # local anon
00:07:00.903 08:41:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:00.903 08:41:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:00.903 08:41:17 -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:00.903 08:41:17 -- setup/common.sh@18 -- # local node=
00:07:00.903 08:41:17 -- setup/common.sh@19 -- # local var val
00:07:00.903 08:41:17 -- setup/common.sh@20 -- # local mem_f mem
00:07:00.903 08:41:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:00.903 08:41:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:00.903 08:41:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:00.903 08:41:17 -- setup/common.sh@28 -- # mapfile -t mem
00:07:00.903 08:41:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:00.903 08:41:17 -- setup/common.sh@31 -- # IFS=': '
00:07:00.903 08:41:17 -- setup/common.sh@31 -- # read -r var val _
00:07:00.903 08:41:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 38987200 kB' 'MemAvailable: 43619336 kB' 'Buffers: 3728 kB' 'Cached: 14026144 kB' 'SwapCached: 0 kB' 'Active: 10966980 kB' 'Inactive: 3665332 kB' 'Active(anon): 9855824 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605632 kB' 'Mapped: 213708 kB' 'Shmem: 9253384 kB' 'KReclaimable: 503356 kB' 'Slab: 1160064 kB' 'SReclaimable: 503356 kB' 'SUnreclaim: 656708 kB' 'KernelStack: 22064 kB' 'PageTables: 9288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11249368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217100 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
00:07:00.903 08:41:17 -- setup/common.sh@32 -- # [per-field scan continues: every field from MemTotal through HardwareCorrupted is skipped until AnonHugePages matches]
00:07:00.904 08:41:17 -- setup/common.sh@33 -- # echo 0
00:07:00.904 08:41:17 -- setup/common.sh@33 -- # return 0
00:07:00.904 08:41:17 -- setup/hugepages.sh@97 -- # anon=0
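verify_nr_hugepages first accounts for transparent hugepages: AnonHugePages is only queried because the traced THP setting, "always [madvise] never", does not select [never], and on this run it contributes nothing (anon=0). A short sketch of that guard, reusing the get_meminfo sketch shown earlier (the sysfs path is the standard kernel THP knob; the logic is a simplified reading of the trace):

    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # kB of THP-backed anonymous memory
    fi
    echo "anon_hugepages=$anon"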
00:07:00.904 08:41:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:00.904 08:41:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:00.904 08:41:17 -- setup/common.sh@18 -- # local node=
00:07:00.904 08:41:17 -- setup/common.sh@19 -- # local var val
00:07:00.904 08:41:17 -- setup/common.sh@20 -- # local mem_f mem
00:07:00.904 08:41:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:00.904 08:41:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:00.904 08:41:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:00.904 08:41:17 -- setup/common.sh@28 -- # mapfile -t mem
00:07:00.904 08:41:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:00.904 08:41:17 -- setup/common.sh@31 -- # IFS=': '
00:07:00.904 08:41:17 -- setup/common.sh@31 -- # read -r var val _
00:07:00.904 08:41:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 38987316 kB' 'MemAvailable: 43619452 kB' 'Buffers: 3728 kB' 'Cached: 14026144 kB' 'SwapCached: 0 kB' 'Active: 10966988 kB' 'Inactive: 3665332 kB' 'Active(anon): 9855832 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605652 kB' 'Mapped: 213652 kB' 'Shmem: 9253384 kB' 'KReclaimable: 503356 kB' 'Slab: 1160028 kB' 'SReclaimable: 503356 kB' 'SUnreclaim: 656672 kB' 'KernelStack: 22032 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11249512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217068 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
00:07:00.904 08:41:17 -- setup/common.sh@32 -- # [per-field scan continues: every field from MemTotal through HugePages_Rsvd is skipped until HugePages_Surp matches]
00:07:00.905 08:41:18 -- setup/common.sh@33 -- # echo 0
00:07:00.905 08:41:18 -- setup/common.sh@33 -- # return 0
00:07:00.905 08:41:18 -- setup/hugepages.sh@99 -- # surp=0
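The surplus and reserved counters read next are kernel-side bookkeeping for the hugepage pool: HugePages_Surp counts pages allocated beyond nr_hugepages through overcommit, and HugePages_Rsvd counts pages promised to mappings but not yet faulted in; the check later in the trace adds both to nr_hugepages before comparing against the expected 1536, so both must be zero here for an exact match. The same fields can be inspected directly:

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo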
00:07:00.905 08:41:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:00.905 08:41:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:00.905 08:41:18 -- setup/common.sh@18 -- # local node=
00:07:00.905 08:41:18 -- setup/common.sh@19 -- # local var val
00:07:00.905 08:41:18 -- setup/common.sh@20 -- # local mem_f mem
00:07:00.905 08:41:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:00.905 08:41:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:00.905 08:41:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:00.905 08:41:18 -- setup/common.sh@28 -- # mapfile -t mem
00:07:00.905 08:41:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:00.905 08:41:18 -- setup/common.sh@31 -- # IFS=': '
00:07:00.905 08:41:18 -- setup/common.sh@31 -- # read -r var val _
00:07:00.905 08:41:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 38987544 kB' 'MemAvailable: 43619680 kB' 'Buffers: 3728 kB' 'Cached: 14026156 kB' 'SwapCached: 0 kB' 'Active: 10966676 kB' 'Inactive: 3665332 kB' 'Active(anon): 9855520 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 605364 kB' 'Mapped: 213652 kB' 'Shmem: 9253396 kB' 'KReclaimable: 503356 kB' 'Slab: 1160004 kB' 'SReclaimable: 503356 kB' 'SUnreclaim: 656648 kB' 'KernelStack: 22048 kB' 'PageTables: 9232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11249532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217068 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
00:07:00.906 08:41:18 -- setup/common.sh@32 -- # [per-field scan continues: every field from MemTotal through HugePages_Free is skipped until HugePages_Rsvd matches]
00:07:00.906 08:41:18 -- setup/common.sh@33 -- # echo 0
00:07:00.906 08:41:18 -- setup/common.sh@33 -- # return 0
00:07:00.906 08:41:18 -- setup/hugepages.sh@100 -- # resv=0
00:07:00.906 08:41:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:07:00.906 nr_hugepages=1536
00:07:00.906 08:41:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:00.906 resv_hugepages=0
00:07:00.906 08:41:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:00.906 surplus_hugepages=0
00:07:00.906 08:41:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:00.906 anon_hugepages=0
00:07:00.906 08:41:18 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:07:00.906 08:41:18 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:07:00.906 08:41:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:00.906 08:41:18 -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:00.906 08:41:18 -- setup/common.sh@18 -- # local node=
00:07:00.906 08:41:18 -- setup/common.sh@19 -- # local var val
00:07:00.906 08:41:18 -- setup/common.sh@20 -- # local mem_f mem
00:07:00.906 08:41:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:00.906 08:41:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:00.906 08:41:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:00.906 08:41:18 -- setup/common.sh@28 -- # mapfile -t mem
00:07:00.906 08:41:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:00.906 08:41:18 -- setup/common.sh@31 -- # IFS=': '
00:07:00.906 08:41:18 -- setup/common.sh@31 -- # read -r var val _
00:07:00.906 08:41:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 38987292 kB' 'MemAvailable: 43619428 kB' 'Buffers: 3728 kB' 'Cached: 14026184 kB' 'SwapCached: 0
kB' 'Active: 10967628 kB' 'Inactive: 3665332 kB' 'Active(anon): 9856472 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 606304 kB' 'Mapped: 213652 kB' 'Shmem: 9253424 kB' 'KReclaimable: 503356 kB' 'Slab: 1160004 kB' 'SReclaimable: 503356 kB' 'SUnreclaim: 656648 kB' 'KernelStack: 22112 kB' 'PageTables: 9520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 11249912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217084 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB' 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # continue 
00:07:00.906 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.906 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.906 08:41:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:00.907 08:41:18 -- setup/common.sh@32 -- # continue 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': ' 00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _ 00:07:00.907 08:41:18 -- 
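What the condensed scans above replay is setup/common.sh's get_meminfo: mapfile the meminfo file, strip any "Node <n> " prefix with an extglob expansion, split each line on IFS=': ', and echo the value of the first key that matches the requested field. A minimal runnable sketch of that idiom follows; it is a simplification, not the SPDK source (sed stands in for the extglob prefix-strip, and the return-1 fallback for a missing key is an assumption):

#!/usr/bin/env bash
# Sketch of the get_meminfo idiom replayed in the trace above.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A node argument switches to that node's meminfo view.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key is one "continue" line in the trace.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")   # per-node lines carry a "Node <n> " prefix
    return 1
}

get_meminfo HugePages_Total    # 1536 at this point in the run
get_meminfo HugePages_Surp 0   # node0 surplus: 0

The design point worth noting is that the scan is linear: every lookup walks the whole file until the key matches, which is exactly why the xtrace shows one comparison per meminfo field.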
00:07:00.907 08:41:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:00.907 08:41:18 -- setup/common.sh@33 -- # echo 1536
00:07:00.907 08:41:18 -- setup/common.sh@33 -- # return 0
00:07:00.907 08:41:18 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:07:00.907 08:41:18 -- setup/hugepages.sh@112 -- # get_nodes
00:07:00.907 08:41:18 -- setup/hugepages.sh@27 -- # local node
00:07:00.907 08:41:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:00.907 08:41:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:07:00.907 08:41:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:00.907 08:41:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:07:00.907 08:41:18 -- setup/hugepages.sh@32 -- # no_nodes=2
00:07:00.907 08:41:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:07:00.907 08:41:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:00.907 08:41:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:00.907 08:41:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:00.907 08:41:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:00.907 08:41:18 -- setup/common.sh@18 -- # local node=0
00:07:00.907 08:41:18 -- setup/common.sh@19 -- # local var val
00:07:00.907 08:41:18 -- setup/common.sh@20 -- # local mem_f mem
00:07:00.907 08:41:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:00.907 08:41:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:00.907 08:41:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:00.907 08:41:18 -- setup/common.sh@28 -- # mapfile -t mem
00:07:00.907 08:41:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:00.907 08:41:18 -- setup/common.sh@31 -- # IFS=': '
00:07:00.907 08:41:18 -- setup/common.sh@31 -- # read -r var val _
00:07:00.908 08:41:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19733472 kB' 'MemUsed: 12905668 kB' 'SwapCached: 0 kB' 'Active: 6803380 kB' 'Inactive: 3291180 kB' 'Active(anon): 6269248 kB' 'Inactive(anon): 0 kB' 'Active(file): 534132 kB' 'Inactive(file): 3291180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9690728 kB' 'Mapped: 121944 kB' 'AnonPages: 407048 kB' 'Shmem: 5865416 kB' 'KernelStack: 13064 kB' 'PageTables: 5276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 335520 kB' 'Slab: 671560 kB' 'SReclaimable: 335520 kB' 'SUnreclaim: 336040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:07:00.908 08:41:18 [xtrace condensed: the node0 keys MemTotal through HugePages_Free are each tested against HugePages_Surp, "continue" on every non-match]
00:07:00.908 08:41:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:00.908 08:41:18 -- setup/common.sh@33 -- # echo 0
00:07:00.908 08:41:18 -- setup/common.sh@33 -- # return 0
00:07:00.908 08:41:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:00.908 08:41:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:00.908 08:41:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:00.908 08:41:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:07:00.908 08:41:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
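The surplus lookup just completed for node0 repeats next for node1; both nodes were discovered earlier by get_nodes, which records each node's hugepage count (512 and 1024 here). A rough equivalent under the standard sysfs layout — the hugepages-2048kB directory name follows from the 2048 kB Hugepagesize in the snapshots, but this loop body is an assumption, not the script's code:

# Rough get_nodes equivalent: tally per-node hugepage counts from sysfs.
declare -A nodes_sys
for node in /sys/devices/system/node/node[0-9]*; do
    nodes_sys[${node##*node}]=$(
        cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}                      # 2 on this machine
echo "node0=${nodes_sys[0]} node1=${nodes_sys[1]}"   # 512 / 1024 in this run

The ${node##*node} expansion is the same trick the trace shows: strip everything up to the last "node" so only the numeric index remains.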
00:07:00.908 08:41:18 -- setup/common.sh@18 -- # local node=1
00:07:00.908 08:41:18 -- setup/common.sh@19 -- # local var val
00:07:00.908 08:41:18 -- setup/common.sh@20 -- # local mem_f mem
00:07:00.908 08:41:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:00.908 08:41:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:07:00.908 08:41:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:07:00.908 08:41:18 -- setup/common.sh@28 -- # mapfile -t mem
00:07:00.908 08:41:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:00.908 08:41:18 -- setup/common.sh@31 -- # IFS=': '
00:07:00.908 08:41:18 -- setup/common.sh@31 -- # read -r var val _
00:07:00.908 08:41:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656076 kB' 'MemFree: 19254188 kB' 'MemUsed: 8401888 kB' 'SwapCached: 0 kB' 'Active: 4163560 kB' 'Inactive: 374152 kB' 'Active(anon): 3586536 kB' 'Inactive(anon): 0 kB' 'Active(file): 577024 kB' 'Inactive(file): 374152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4339188 kB' 'Mapped: 91708 kB' 'AnonPages: 198588 kB' 'Shmem: 3388012 kB' 'KernelStack: 9016 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 167836 kB' 'Slab: 488444 kB' 'SReclaimable: 167836 kB' 'SUnreclaim: 320608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:07:00.908 08:41:18 [xtrace condensed: the node1 keys MemTotal through HugePages_Free are each tested against HugePages_Surp, "continue" on every non-match]
00:07:00.909 08:41:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:00.909 08:41:18 -- setup/common.sh@33 -- # echo 0
00:07:00.909 08:41:18 -- setup/common.sh@33 -- # return 0
00:07:00.909 08:41:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:07:00.909 08:41:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:00.909 08:41:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:00.909 08:41:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:00.909 08:41:18 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:07:00.909 node0=512 expecting 512
00:07:00.909 08:41:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:07:00.909 08:41:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:07:00.909 08:41:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:07:00.909 08:41:18 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:07:00.909 node1=1024 expecting 1024
00:07:00.909 08:41:18 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:07:00.909
00:07:00.909 real 0m2.923s
00:07:00.909 user 0m0.941s
00:07:00.909 sys 0m1.880s
00:07:00.909 08:41:18 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:00.909 08:41:18 -- common/autotest_common.sh@10 -- # set +x
00:07:00.909 ************************************
00:07:00.909 END TEST custom_alloc
00:07:00.909 ************************************
00:07:00.909 08:41:18 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:07:00.909 08:41:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:00.909 08:41:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:00.909 08:41:18 -- common/autotest_common.sh@10 -- # set +x
00:07:01.168 ************************************
00:07:01.168 START TEST no_shrink_alloc
00:07:01.168 ************************************
00:07:01.168 08:41:18 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:07:01.168 08:41:18 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:07:01.168 08:41:18 -- setup/hugepages.sh@49 -- # local size=2097152
00:07:01.168 08:41:18 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:07:01.168 08:41:18 -- setup/hugepages.sh@51 -- # shift
00:07:01.168 08:41:18 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:07:01.168 08:41:18 -- setup/hugepages.sh@52 -- # local node_ids
00:07:01.168 08:41:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
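Before the new test gets going, it is worth restating why custom_alloc just passed. Two comparisons from the trace decide it: the global pool must equal the requested count plus surplus and reserved pages, and the comma-joined per-node counts must match the requested 512/1024 split (the escaped right-hand side in the trace is just xtrace quoting of a literal pattern). Restated compactly with the values this run produced:

# The two pass conditions of custom_alloc, values taken from the trace.
nr_hugepages=1536 surp=0 resv=0
(( 1536 == nr_hugepages + surp + resv )) || exit 1    # global pool check

declare -A nodes_test=([0]=512 [1]=1024)              # per-node expectation
[[ "${nodes_test[0]},${nodes_test[1]}" == "512,1024" ]] &&
    echo "custom_alloc: per-node split OK"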
00:07:01.168 08:41:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:07:01.168 08:41:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:07:01.168 08:41:18 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:07:01.168 08:41:18 -- setup/hugepages.sh@62 -- # local user_nodes
00:07:01.168 08:41:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:07:01.168 08:41:18 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:07:01.168 08:41:18 -- setup/hugepages.sh@67 -- # nodes_test=()
00:07:01.168 08:41:18 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:07:01.168 08:41:18 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:07:01.168 08:41:18 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:07:01.168 08:41:18 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:07:01.168 08:41:18 -- setup/hugepages.sh@73 -- # return 0
00:07:01.168 08:41:18 -- setup/hugepages.sh@198 -- # setup output
00:07:01.168 08:41:18 -- setup/common.sh@9 -- # [[ output == output ]]
00:07:01.168 08:41:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:07:04.452 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:07:04.452 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:07:04.452 08:41:21 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:07:04.452 08:41:21 -- setup/hugepages.sh@89 -- # local node
00:07:04.452 08:41:21 -- setup/hugepages.sh@90 -- # local sorted_t
00:07:04.452 08:41:21 -- setup/hugepages.sh@91 -- # local sorted_s
00:07:04.452 08:41:21 -- setup/hugepages.sh@92 -- # local surp
00:07:04.452 08:41:21 -- setup/hugepages.sh@93 -- # local resv
00:07:04.452 08:41:21 -- setup/hugepages.sh@94 -- # local anon
00:07:04.452 08:41:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:04.452 08:41:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:04.452 08:41:21 -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:04.452 08:41:21 -- setup/common.sh@18 -- # local node=
00:07:04.452 08:41:21 -- setup/common.sh@19 -- # local var val
00:07:04.452 08:41:21 -- setup/common.sh@20 -- # local mem_f mem
00:07:04.452 08:41:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:04.452 08:41:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:04.452 08:41:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:04.452 08:41:21 -- setup/common.sh@28 -- # mapfile -t mem
00:07:04.452 08:41:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:04.452 08:41:21 -- setup/common.sh@31 -- # IFS=': '
00:07:04.452 08:41:21 -- setup/common.sh@31 -- # read -r var val _
00:07:04.452 08:41:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40003960 kB' 'MemAvailable: 44636096 kB' 'Buffers: 3728 kB' 'Cached: 14026284 kB' 'SwapCached: 0 kB' 'Active: 10969076 kB' 'Inactive: 3665332 kB' 'Active(anon): 9857920 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607740 kB' 'Mapped: 214640 kB' 'Shmem: 9253524 kB' 'KReclaimable: 503356 kB' 'Slab: 1159744 kB' 'SReclaimable: 503356 kB' 'SUnreclaim: 656388 kB' 'KernelStack: 22304 kB' 'PageTables: 9748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11254164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217356 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
00:07:04.452 08:41:21 [xtrace condensed: the scan tests MemTotal onward against AnonHugePages, "continue" on every non-match]
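The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test above is verify_nr_hugepages reading /sys/kernel/mm/transparent_hugepage/enabled: AnonHugePages is only fetched when transparent hugepages are not pinned to [never], since THP-backed anonymous memory is what could distort the hugepage accounting. A sketch of that guard, reusing the get_meminfo sketch from earlier (the sysfs file is the standard kernel knob; the control flow here is inferred from the trace):

# Only count AnonHugePages when THP is not disabled outright.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)                 # 0 kB in this run
fi
echo "anon_hugepages=$anon"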
00:07:04.452 08:41:21 [xtrace condensed: the scan continues through Active(anon) up to HardwareCorrupted, "continue" on every non-match]
00:07:04.453 08:41:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:04.453 08:41:21 -- setup/common.sh@33 -- # echo 0
00:07:04.453 08:41:21 -- setup/common.sh@33 -- # return 0
00:07:04.453 08:41:21 -- setup/hugepages.sh@97 -- # anon=0
00:07:04.453 08:41:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:04.453 08:41:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:04.453 08:41:21 -- setup/common.sh@18 -- # local node=
00:07:04.453 08:41:21 -- setup/common.sh@19 -- # local var val
00:07:04.453 08:41:21 -- setup/common.sh@20 -- # local mem_f mem
00:07:04.453 08:41:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:04.453 08:41:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:04.453 08:41:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:04.453 08:41:21 -- setup/common.sh@28 -- # mapfile -t mem
00:07:04.453 08:41:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:04.453 08:41:21 -- setup/common.sh@31 -- # IFS=': '
00:07:04.453 08:41:21 -- setup/common.sh@31 -- # read -r var val _
00:07:04.453 08:41:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40005152 kB' 'MemAvailable: 44637288 kB' 'Buffers: 3728 kB' 'Cached: 14026288 kB' 'SwapCached: 0 kB' 'Active: 10969928 kB' 'Inactive: 3665332 kB' 'Active(anon): 9858772 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB'
00:07:04.453 08:41:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:04.453 08:41:21 -- setup/common.sh@32 -- # continue
[... the @31 IFS=': ' / read -r var val _ and @32 key-test / continue steps repeat for every field from MemFree through HugePages_Rsvd ...]
00:07:04.716 08:41:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:04.716 08:41:21 -- setup/common.sh@33 -- # echo 0
00:07:04.716 08:41:21 -- setup/common.sh@33 -- # return 0
00:07:04.716 08:41:21 -- setup/hugepages.sh@99 -- # surp=0
00:07:04.716 08:41:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:04.716 08:41:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:04.716 08:41:21 -- setup/common.sh@18 -- # local node=
00:07:04.716 08:41:21 -- setup/common.sh@19 -- # local var val
00:07:04.716 08:41:21 -- setup/common.sh@20 -- # local mem_f mem
00:07:04.716 08:41:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:04.716 08:41:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:04.716 08:41:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:04.716 08:41:21 -- setup/common.sh@28 -- # mapfile -t mem
00:07:04.716 08:41:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:04.716 08:41:21 -- setup/common.sh@31 -- # IFS=': '
00:07:04.716 08:41:21 -- setup/common.sh@31 -- # read -r var val _
00:07:04.716 08:41:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39999500 kB' 'MemAvailable: 44631636 kB' 'Buffers: 3728 kB' 'Cached: 14026300 kB' 'SwapCached: 0 kB' 'Active: 10974020 kB' 'Inactive: 3665332 kB' 'Active(anon): 9862864 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 612540 kB' 'Mapped: 214600 kB' 'Shmem: 9253540 kB' 'KReclaimable: 503356 kB' 'Slab: 1159824 kB' 'SReclaimable: 503356 kB' 'SUnreclaim: 656468 kB' 'KernelStack: 22064 kB' 'PageTables: 9820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11257940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217280 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
00:07:04.716 08:41:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:04.716 08:41:21 -- setup/common.sh@32 -- # continue
[... the @31 IFS=': ' / read -r var val _ and @32 key-test / continue steps repeat for every field from MemFree through HugePages_Free ...]
00:07:04.717 08:41:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:04.717 08:41:21 -- setup/common.sh@33 -- # echo 0
00:07:04.717 08:41:21 -- setup/common.sh@33 -- # return 0
00:07:04.717 08:41:21 -- setup/hugepages.sh@100 -- # resv=0
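Taken together, the trace above is a plain key/value lookup over a meminfo file: get_meminfo reads the file into an array, strips any "Node N " prefix, then splits each line on IFS=': ' and skips fields with "continue" until the requested key matches, echoing its value. Below is a minimal standalone sketch of that pattern, reconstructed from the trace; it mirrors the names the trace shows (get_meminfo, mem_f, mem) but is an approximation, not the verbatim setup/common.sh.

    #!/usr/bin/env bash
    # Sketch of the lookup pattern traced above.
    # extglob is required for the +([0-9]) pattern used to strip "Node N ".
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem line
        mem_f=/proc/meminfo
        # With a node argument, read that node's own meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Skip fields until the requested key matches, as in the trace.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total      # prints 1024 on this machine
    get_meminfo HugePages_Surp 0     # node 0 value, via the per-node file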
00:07:04.717 08:41:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:07:04.717 nr_hugepages=1024
00:07:04.717 08:41:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:04.717 resv_hugepages=0
00:07:04.717 08:41:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:04.717 surplus_hugepages=0
00:07:04.717 08:41:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:04.717 anon_hugepages=0
00:07:04.717 08:41:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:04.717 08:41:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:07:04.717 08:41:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:04.717 08:41:21 -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:04.717 08:41:21 -- setup/common.sh@18 -- # local node=
00:07:04.717 08:41:21 -- setup/common.sh@19 -- # local var val
00:07:04.717 08:41:21 -- setup/common.sh@20 -- # local mem_f mem
00:07:04.717 08:41:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:04.717 08:41:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:04.717 08:41:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:04.717 08:41:21 -- setup/common.sh@28 -- # mapfile -t mem
00:07:04.717 08:41:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:04.717 08:41:21 -- setup/common.sh@31 -- # IFS=': '
00:07:04.717 08:41:21 -- setup/common.sh@31 -- # read -r var val _
00:07:04.717 08:41:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40001208 kB' 'MemAvailable: 44633344 kB' 'Buffers: 3728 kB' 'Cached: 14026312 kB' 'SwapCached: 0 kB' 'Active: 10969012 kB' 'Inactive: 3665332 kB' 'Active(anon): 9857856 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607636 kB' 'Mapped: 214096 kB' 'Shmem: 9253552 kB' 'KReclaimable: 503356 kB' 'Slab: 1160036 kB' 'SReclaimable: 503356 kB' 'SUnreclaim: 656680 kB' 'KernelStack: 22096 kB' 'PageTables: 9360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11250688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217148 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
00:07:04.717 08:41:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:04.717 08:41:21 -- setup/common.sh@32 -- # continue
[... the @31 IFS=': ' / read -r var val _ and @32 key-test / continue steps repeat for every field from MemFree through Unaccepted ...]
00:07:04.718 08:41:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:07:04.718 08:41:21 -- setup/common.sh@33 -- # echo 1024
00:07:04.718 08:41:21 -- setup/common.sh@33 -- # return 0
00:07:04.718 08:41:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
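The hugepages.sh@107 and @109 checks above encode the accounting identity this test cares about: the 1024 pages requested must equal HugePages_Total, with surplus and reserved pages at zero and no anonymous (THP) hugepages inflating the count. A hedged sketch of the same arithmetic as a standalone check; verify_hugepages is a hypothetical name (the real script inlines these tests) and it reuses the get_meminfo sketch shown earlier:

    # Hypothetical wrapper around the accounting identity traced at
    # hugepages.sh@107/@109/@110: requested == total + surplus + reserved.
    verify_hugepages() {
        local requested=$1
        local total surp resv anon
        total=$(get_meminfo HugePages_Total)
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        anon=$(get_meminfo AnonHugePages)    # kB of THP; 0 in this run
        (( requested == total + surp + resv )) || return 1
        (( requested == total )) || return 1
        (( anon == 0 )) || return 1
        echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    }

    verify_hugepages 1024 || echo 'hugepage accounting mismatch'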
00:07:04.718 08:41:21 -- setup/hugepages.sh@112 -- # get_nodes
00:07:04.718 08:41:21 -- setup/hugepages.sh@27 -- # local node
00:07:04.718 08:41:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:04.718 08:41:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:07:04.718 08:41:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:07:04.718 08:41:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:07:04.718 08:41:21 -- setup/hugepages.sh@32 -- # no_nodes=2
00:07:04.718 08:41:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
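get_nodes above discovers the NUMA topology by globbing /sys/devices/system/node/node+([0-9]) and records a per-node hugepage count (1024 on node 0, 0 on node 1, hence no_nodes=2); the per-node loop that follows then re-reads each node's counters through that node's own meminfo file. A minimal sketch of the enumeration, assuming the kernel's standard per-node nr_hugepages sysfs layout and reusing the get_meminfo sketch above:

    # Enumerate NUMA nodes and read each node's 2 MiB hugepage count,
    # mirroring get_nodes in setup/hugepages.sh.
    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # Key the array by the numeric node id ("node0" -> 0).
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"    # 2 on this machine

    for id in "${!nodes_sys[@]}"; do
        echo "node$id: nr_hugepages=${nodes_sys[$id]} surp=$(get_meminfo HugePages_Surp "$id")"
    done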
00:07:04.718 08:41:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:07:04.718 08:41:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:07:04.718 08:41:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:07:04.719 08:41:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:04.719 08:41:21 -- setup/common.sh@18 -- # local node=0
00:07:04.719 08:41:21 -- setup/common.sh@19 -- # local var val
00:07:04.719 08:41:21 -- setup/common.sh@20 -- # local mem_f mem
00:07:04.719 08:41:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:04.719 08:41:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:07:04.719 08:41:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:07:04.719 08:41:21 -- setup/common.sh@28 -- # mapfile -t mem
00:07:04.719 08:41:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:04.719 08:41:21 -- setup/common.sh@31 -- # IFS=': '
00:07:04.719 08:41:21 -- setup/common.sh@31 -- # read -r var val _
00:07:04.719 08:41:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18679496 kB' 'MemUsed: 13959644 kB' 'SwapCached: 0 kB' 'Active: 6805008 kB' 'Inactive: 3291180 kB' 'Active(anon): 6270876 kB' 'Inactive(anon): 0 kB' 'Active(file): 534132 kB' 'Inactive(file): 3291180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9690860 kB' 'Mapped: 121944 kB' 'AnonPages: 408564 kB' 'Shmem: 5865548 kB' 'KernelStack: 13080 kB' 'PageTables: 5340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 335520 kB' 'Slab: 671384 kB' 'SReclaimable: 335520 kB' 'SUnreclaim: 335864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:07:04.719 08:41:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:04.719 08:41:21 -- setup/common.sh@32 -- # continue
[... the @31 IFS=': ' / read -r var val _ and @32 key-test / continue steps repeat for every node0 field from MemFree through FileHugePages ...]
00:07:04.719 08:41:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:04.719 08:41:21 -- setup/common.sh@32 -- # continue 00:07:04.719 08:41:21 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.719 08:41:21 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.719 08:41:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.719 08:41:21 -- setup/common.sh@32 -- # continue 00:07:04.719 08:41:21 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.719 08:41:21 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.719 08:41:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.719 08:41:21 -- setup/common.sh@32 -- # continue 00:07:04.719 08:41:21 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.719 08:41:21 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.719 08:41:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.719 08:41:21 -- setup/common.sh@32 -- # continue 00:07:04.720 08:41:21 -- setup/common.sh@31 -- # IFS=': ' 00:07:04.720 08:41:21 -- setup/common.sh@31 -- # read -r var val _ 00:07:04.720 08:41:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:04.720 08:41:21 -- setup/common.sh@33 -- # echo 0 00:07:04.720 08:41:21 -- setup/common.sh@33 -- # return 0 00:07:04.720 08:41:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:04.720 08:41:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:04.720 08:41:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:04.720 08:41:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:04.720 08:41:21 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:04.720 node0=1024 expecting 1024 00:07:04.720 08:41:21 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:04.720 08:41:21 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:07:04.720 08:41:21 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:07:04.720 08:41:21 -- setup/hugepages.sh@202 -- # setup output 00:07:04.720 08:41:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:04.720 08:41:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:08.009 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:07:08.009 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:07:08.009 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:07:08.009 08:41:24 -- setup/hugepages.sh@204 -- # 
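The node0 pass above reduces to reading the per-node meminfo and comparing HugePages_Total against the requested count (here 1024 pages are already allocated, so the later NRHUGE=512 request is already satisfied). A minimal stand-alone sketch of that check, not the SPDK setup scripts themselves; the "expected" parameter is an assumption defaulting to the value in this log:

#!/usr/bin/env bash
# Sketch: compare node0's allocated hugepages against an expected count.
# "expected" is a hypothetical parameter, not part of the SPDK scripts.
expected=${1:-1024}
node_meminfo=/sys/devices/system/node/node0/meminfo

# Per-node meminfo lines read "Node 0 HugePages_Total:  1024";
# the last field is the page count.
total=$(awk '/HugePages_Total/ {print $NF}' "$node_meminfo")

echo "node0=$total expecting $expected"
[[ $total == "$expected" ]] || exit 1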
00:07:08.009 08:41:24 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:07:08.009 08:41:24 -- setup/hugepages.sh@89 -- # local node
00:07:08.009 08:41:24 -- setup/hugepages.sh@90 -- # local sorted_t
00:07:08.009 08:41:24 -- setup/hugepages.sh@91 -- # local sorted_s
00:07:08.009 08:41:24 -- setup/hugepages.sh@92 -- # local surp
00:07:08.009 08:41:24 -- setup/hugepages.sh@93 -- # local resv
00:07:08.009 08:41:24 -- setup/hugepages.sh@94 -- # local anon
00:07:08.009 08:41:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:07:08.009 08:41:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:07:08.009 08:41:24 -- setup/common.sh@17 -- # local get=AnonHugePages
00:07:08.009 08:41:24 -- setup/common.sh@18 -- # local node=
00:07:08.009 08:41:24 -- setup/common.sh@19 -- # local var val
00:07:08.009 08:41:24 -- setup/common.sh@20 -- # local mem_f mem
00:07:08.009 08:41:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:08.009 08:41:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:08.009 08:41:24 -- setup/common.sh@28 -- # mapfile -t mem
00:07:08.009 08:41:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:08.009 08:41:24 -- setup/common.sh@31 -- # IFS=': '
00:07:08.009 08:41:24 -- setup/common.sh@31 -- # read -r var val _
00:07:08.009 08:41:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39992816 kB' 'MemAvailable: 44624952 kB' 'Buffers: 3728 kB' 'Cached: 14026380 kB' 'SwapCached: 0 kB' 'Active: 10968740 kB' 'Inactive: 3665332 kB' 'Active(anon): 9857584 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607108 kB' 'Mapped: 213700 kB' 'Shmem: 9253620 kB' 'KReclaimable: 503356 kB' 'Slab: 1159908 kB' 'SReclaimable: 503356 kB' 'SUnreclaim: 656552 kB' 'KernelStack: 22160 kB' 'PageTables: 9368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11253664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217308 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
00:07:08.009 08:41:24 -- setup/common.sh@32 -- # per-field [[ $var == AnonHugePages ]] / continue scan over the snapshot above (MemTotal through HardwareCorrupted), elided
00:07:08.010 08:41:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:07:08.010 08:41:24 -- setup/common.sh@33 -- # echo 0
00:07:08.010 08:41:24 -- setup/common.sh@33 -- # return 0
00:07:08.010 08:41:24 -- setup/hugepages.sh@97 -- # anon=0
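The get_meminfo calls traced here all follow one parsing idiom: slurp the meminfo file with mapfile, strip any "Node N " prefix so per-node and whole-system files look alike, then split each entry on ': ' and return the value of the first matching key. A hedged reconstruction of that idiom, inferred from the trace rather than copied from setup/common.sh:

#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern below

# Sketch of a get_meminfo-style lookup: get_meminfo KEY [NODE]
get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f=/proc/meminfo mem line

    # Prefer the per-node view when a node was requested and exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && echo "$val" && return 0
    done
    return 1
}

get_meminfo HugePages_Total      # whole-system value
get_meminfo HugePages_Surp 0     # node0 value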
00:07:08.010 08:41:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:07:08.010 08:41:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:07:08.010 08:41:24 -- setup/common.sh@18 -- # local node=
00:07:08.010 08:41:24 -- setup/common.sh@19 -- # local var val
00:07:08.010 08:41:24 -- setup/common.sh@20 -- # local mem_f mem
00:07:08.010 08:41:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:08.010 08:41:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:08.010 08:41:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:08.010 08:41:24 -- setup/common.sh@28 -- # mapfile -t mem
00:07:08.010 08:41:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:08.010 08:41:24 -- setup/common.sh@31 -- # IFS=': '
00:07:08.010 08:41:24 -- setup/common.sh@31 -- # read -r var val _
00:07:08.011 08:41:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 40000764 kB' 'MemAvailable: 44632900 kB' 'Buffers: 3728 kB' 'Cached: 14026380 kB' 'SwapCached: 0 kB' 'Active: 10970544 kB' 'Inactive: 3665332 kB' 'Active(anon): 9859388 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 609016 kB' 'Mapped: 213688 kB' 'Shmem: 9253620 kB' 'KReclaimable: 503356 kB' 'Slab: 1159908 kB' 'SReclaimable: 503356 kB' 'SUnreclaim: 656552 kB' 'KernelStack: 22272 kB' 'PageTables: 9640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11301688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217260 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
00:07:08.011 08:41:24 -- setup/common.sh@32 -- # per-field [[ $var == HugePages_Surp ]] / continue scan over the snapshot above, elided
00:07:08.012 08:41:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:07:08.012 08:41:24 -- setup/common.sh@33 -- # echo 0
00:07:08.012 08:41:24 -- setup/common.sh@33 -- # return 0
00:07:08.012 08:41:24 -- setup/hugepages.sh@99 -- # surp=0
00:07:08.012 08:41:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:07:08.012 08:41:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:07:08.012 08:41:24 -- setup/common.sh@18 -- # local node=
00:07:08.012 08:41:24 -- setup/common.sh@19 -- # local var val
00:07:08.012 08:41:24 -- setup/common.sh@20 -- # local mem_f mem
00:07:08.012 08:41:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:08.012 08:41:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:08.012 08:41:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:08.012 08:41:24 -- setup/common.sh@28 -- # mapfile -t mem
00:07:08.012 08:41:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:08.012 08:41:24 -- setup/common.sh@31 -- # IFS=': '
00:07:08.012 08:41:24 -- setup/common.sh@31 -- # read -r var val _
00:07:08.012 08:41:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39999236 kB' 'MemAvailable: 44631372 kB' 'Buffers: 3728 kB' 'Cached: 14026392 kB' 'SwapCached: 0 kB' 'Active: 10968848 kB' 'Inactive: 3665332 kB' 'Active(anon): 9857692 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607352 kB' 'Mapped: 213688 kB' 'Shmem: 9253632 kB' 'KReclaimable: 503356 kB' 'Slab: 1159972 kB' 'SReclaimable: 503356 kB' 'SUnreclaim: 656616 kB' 'KernelStack: 22256 kB' 'PageTables: 9840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11251808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217228 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
00:07:08.012 08:41:24 -- setup/common.sh@32 -- # per-field [[ $var == HugePages_Rsvd ]] / continue scan over the snapshot above, elided
00:07:08.013 08:41:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:07:08.013 08:41:24 -- setup/common.sh@33 -- # echo 0
00:07:08.013 08:41:24 -- setup/common.sh@33 -- # return 0
00:07:08.013 08:41:24 -- setup/hugepages.sh@100 -- # resv=0
00:07:08.013 08:41:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:07:08.013 nr_hugepages=1024
00:07:08.013 08:41:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:07:08.013 resv_hugepages=0
00:07:08.013 08:41:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:07:08.013 surplus_hugepages=0
00:07:08.013 08:41:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:07:08.013 anon_hugepages=0
00:07:08.013 08:41:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:07:08.013 08:41:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:07:08.013 08:41:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:07:08.013 08:41:24 -- setup/common.sh@17 -- # local get=HugePages_Total
00:07:08.013 08:41:24 -- setup/common.sh@18 -- # local node=
00:07:08.013 08:41:24 -- setup/common.sh@19 -- # local var val
00:07:08.013 08:41:24 -- setup/common.sh@20 -- # local mem_f mem
00:07:08.013 08:41:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:07:08.013 08:41:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:07:08.013 08:41:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:07:08.013 08:41:24 -- setup/common.sh@28 -- # mapfile -t mem
00:07:08.013 08:41:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:07:08.013 08:41:24 -- setup/common.sh@31 -- # IFS=': '
00:07:08.013 08:41:24 -- setup/common.sh@31 -- # read -r var val _
00:07:08.013 08:41:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295216 kB' 'MemFree: 39998908 kB' 'MemAvailable: 44631044 kB' 'Buffers: 3728 kB' 'Cached: 14026408 kB' 'SwapCached: 0 kB' 'Active: 10969096 kB' 'Inactive: 3665332 kB' 'Active(anon): 9857940 kB' 'Inactive(anon): 0 kB' 'Active(file): 1111156 kB' 'Inactive(file): 3665332 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 607480 kB' 'Mapped: 213688 kB' 'Shmem: 9253648 kB' 'KReclaimable: 503356 kB' 'Slab: 1159972 kB' 'SReclaimable: 503356 kB' 'SUnreclaim: 656616 kB' 'KernelStack: 22144 kB' 'PageTables: 9592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 11250672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217180 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB'
34359738367 kB' 'VmallocUsed: 217180 kB' 'VmallocChunk: 0 kB' 'Percpu: 130816 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3296628 kB' 'DirectMap2M: 18409472 kB' 'DirectMap1G: 47185920 kB' 00:07:08.013 08:41:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.013 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.013 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.013 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.013 08:41:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.013 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- 
setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.014 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.014 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var 
val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:07:08.015 08:41:24 -- 
setup/common.sh@33 -- # echo 1024 00:07:08.015 08:41:24 -- setup/common.sh@33 -- # return 0 00:07:08.015 08:41:24 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:07:08.015 08:41:24 -- setup/hugepages.sh@112 -- # get_nodes 00:07:08.015 08:41:24 -- setup/hugepages.sh@27 -- # local node 00:07:08.015 08:41:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:08.015 08:41:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:07:08.015 08:41:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:07:08.015 08:41:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:07:08.015 08:41:24 -- setup/hugepages.sh@32 -- # no_nodes=2 00:07:08.015 08:41:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:07:08.015 08:41:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:07:08.015 08:41:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:07:08.015 08:41:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:07:08.015 08:41:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:07:08.015 08:41:24 -- setup/common.sh@18 -- # local node=0 00:07:08.015 08:41:24 -- setup/common.sh@19 -- # local var val 00:07:08.015 08:41:24 -- setup/common.sh@20 -- # local mem_f mem 00:07:08.015 08:41:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:07:08.015 08:41:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:07:08.015 08:41:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:07:08.015 08:41:24 -- setup/common.sh@28 -- # mapfile -t mem 00:07:08.015 08:41:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18672836 kB' 'MemUsed: 13966304 kB' 'SwapCached: 0 kB' 'Active: 6803208 kB' 'Inactive: 3291180 kB' 'Active(anon): 6269076 kB' 'Inactive(anon): 0 kB' 'Active(file): 534132 kB' 'Inactive(file): 3291180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9690928 kB' 'Mapped: 121944 kB' 'AnonPages: 406596 kB' 'Shmem: 5865616 kB' 'KernelStack: 13048 kB' 'PageTables: 5236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 335520 kB' 'Slab: 671432 kB' 'SReclaimable: 335520 kB' 'SUnreclaim: 335912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # 
read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.015 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.015 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 
08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # continue 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # IFS=': ' 00:07:08.016 08:41:24 -- setup/common.sh@31 -- # read -r var val _ 00:07:08.016 08:41:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:07:08.016 08:41:24 -- setup/common.sh@33 -- # echo 0 00:07:08.016 08:41:24 -- setup/common.sh@33 -- # return 0 00:07:08.016 08:41:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:07:08.016 08:41:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:07:08.016 08:41:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:07:08.016 08:41:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:07:08.016 08:41:24 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:07:08.016 node0=1024 expecting 1024 00:07:08.016 08:41:24 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:07:08.016 00:07:08.016 real 0m6.646s 00:07:08.016 user 0m2.389s 00:07:08.016 sys 0m4.272s 00:07:08.016 08:41:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.016 08:41:24 -- common/autotest_common.sh@10 -- # set +x 00:07:08.016 ************************************ 00:07:08.016 END TEST no_shrink_alloc 00:07:08.016 ************************************ 00:07:08.016 08:41:24 -- setup/hugepages.sh@217 -- # clear_hp 00:07:08.016 08:41:24 -- setup/hugepages.sh@37 -- # local node hp 00:07:08.016 08:41:24 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:08.016 
08:41:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:08.016 08:41:24 -- setup/hugepages.sh@41 -- # echo 0 00:07:08.016 08:41:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:08.016 08:41:24 -- setup/hugepages.sh@41 -- # echo 0 00:07:08.016 08:41:24 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:07:08.016 08:41:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:08.016 08:41:24 -- setup/hugepages.sh@41 -- # echo 0 00:07:08.016 08:41:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:07:08.016 08:41:24 -- setup/hugepages.sh@41 -- # echo 0 00:07:08.016 08:41:24 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:07:08.016 08:41:24 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:07:08.016 00:07:08.016 real 0m25.483s 00:07:08.016 user 0m8.296s 00:07:08.016 sys 0m15.322s 00:07:08.016 08:41:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.016 08:41:24 -- common/autotest_common.sh@10 -- # set +x 00:07:08.016 ************************************ 00:07:08.016 END TEST hugepages 00:07:08.016 ************************************ 00:07:08.016 08:41:25 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:07:08.016 08:41:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:08.016 08:41:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.016 08:41:25 -- common/autotest_common.sh@10 -- # set +x 00:07:08.016 ************************************ 00:07:08.017 START TEST driver 00:07:08.017 ************************************ 00:07:08.017 08:41:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:07:08.276 * Looking for test storage... 
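For reference, the clear_hp teardown above resets every per-node hugepage pool by writing 0 into each nr_hugepages knob under sysfs. A minimal sketch of that pattern, assuming the standard sysfs layout and root privileges:

    # drop all reserved hugepages of every size on every NUMA node
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done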
00:07:08.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:07:08.276 08:41:25 -- setup/driver.sh@68 -- # setup reset 00:07:08.276 08:41:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:08.276 08:41:25 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:13.553 08:41:30 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:07:13.553 08:41:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.553 08:41:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.553 08:41:30 -- common/autotest_common.sh@10 -- # set +x 00:07:13.553 ************************************ 00:07:13.553 START TEST guess_driver 00:07:13.553 ************************************ 00:07:13.553 08:41:30 -- common/autotest_common.sh@1111 -- # guess_driver 00:07:13.553 08:41:30 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:07:13.553 08:41:30 -- setup/driver.sh@47 -- # local fail=0 00:07:13.553 08:41:30 -- setup/driver.sh@49 -- # pick_driver 00:07:13.553 08:41:30 -- setup/driver.sh@36 -- # vfio 00:07:13.553 08:41:30 -- setup/driver.sh@21 -- # local iommu_grups 00:07:13.553 08:41:30 -- setup/driver.sh@22 -- # local unsafe_vfio 00:07:13.553 08:41:30 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:07:13.553 08:41:30 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:07:13.553 08:41:30 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:07:13.553 08:41:30 -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:07:13.553 08:41:30 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:07:13.553 08:41:30 -- setup/driver.sh@14 -- # mod vfio_pci 00:07:13.553 08:41:30 -- setup/driver.sh@12 -- # dep vfio_pci 00:07:13.553 08:41:30 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:07:13.553 08:41:30 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:07:13.553 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:13.553 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:13.553 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:07:13.553 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:07:13.553 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:07:13.553 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:07:13.553 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:07:13.553 08:41:30 -- setup/driver.sh@30 -- # return 0 00:07:13.553 08:41:30 -- setup/driver.sh@37 -- # echo vfio-pci 00:07:13.553 08:41:30 -- setup/driver.sh@49 -- # driver=vfio-pci 00:07:13.553 08:41:30 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:07:13.553 08:41:30 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:07:13.553 Looking for driver=vfio-pci 00:07:13.553 08:41:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:13.553 08:41:30 -- setup/driver.sh@45 -- # setup output config 00:07:13.553 08:41:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:13.553 08:41:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:16.088 08:41:33 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:07:16.088 08:41:33 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:16.088 08:41:33 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:17.995 08:41:34 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:07:17.995 08:41:34 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:07:17.995 08:41:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:07:17.995 08:41:34 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:07:17.995 08:41:34 -- setup/driver.sh@65 -- # setup reset 00:07:17.995 08:41:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:17.995 08:41:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:23.269 00:07:23.269 real 0m9.281s 00:07:23.269 user 0m2.287s 00:07:23.269 sys 0m4.644s 00:07:23.269 08:41:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.269 08:41:39 -- common/autotest_common.sh@10 -- # set +x 00:07:23.269 ************************************ 00:07:23.269 END TEST guess_driver 00:07:23.269 ************************************ 00:07:23.269 00:07:23.269 real 0m14.404s 00:07:23.269 user 0m3.747s 00:07:23.269 sys 0m7.515s 00:07:23.269 08:41:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.269 08:41:39 -- common/autotest_common.sh@10 -- # set +x 00:07:23.269 ************************************ 00:07:23.269 END TEST driver 00:07:23.269 ************************************ 00:07:23.269 08:41:39 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:07:23.269 08:41:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:23.269 08:41:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.269 08:41:39 -- common/autotest_common.sh@10 -- # set +x 00:07:23.269 ************************************ 00:07:23.269 START TEST devices 00:07:23.269 ************************************ 00:07:23.269 08:41:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:07:23.269 * Looking for test storage... 
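The guess_driver pass above settles on vfio-pci because the host exposes populated IOMMU groups (176 of them) and modprobe can resolve the vfio_pci module chain; only if neither condition held would the test report "No valid driver found". A simplified sketch of that decision, not the exact driver.sh logic:

    shopt -s nullglob
    groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci >/dev/null 2>&1; then
        echo 'Looking for driver=vfio-pci'
    else
        echo 'No valid driver found'
    fi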
00:07:23.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:07:23.269 08:41:39 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:07:23.269 08:41:39 -- setup/devices.sh@192 -- # setup reset 00:07:23.269 08:41:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:07:23.269 08:41:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:26.560 08:41:43 -- setup/devices.sh@194 -- # get_zoned_devs 00:07:26.560 08:41:43 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:26.560 08:41:43 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:26.560 08:41:43 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:26.560 08:41:43 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:26.560 08:41:43 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:26.560 08:41:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:26.560 08:41:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:26.560 08:41:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:26.560 08:41:43 -- setup/devices.sh@196 -- # blocks=() 00:07:26.560 08:41:43 -- setup/devices.sh@196 -- # declare -a blocks 00:07:26.560 08:41:43 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:26.560 08:41:43 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:26.560 08:41:43 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:26.560 08:41:43 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:26.560 08:41:43 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:26.560 08:41:43 -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:26.560 08:41:43 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:07:26.560 08:41:43 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:07:26.560 08:41:43 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:26.560 08:41:43 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:07:26.560 08:41:43 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:07:26.560 No valid GPT data, bailing 00:07:26.560 08:41:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:26.560 08:41:43 -- scripts/common.sh@391 -- # pt= 00:07:26.560 08:41:43 -- scripts/common.sh@392 -- # return 1 00:07:26.560 08:41:43 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:26.560 08:41:43 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:26.560 08:41:43 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:26.560 08:41:43 -- setup/common.sh@80 -- # echo 1600321314816 00:07:26.560 08:41:43 -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:07:26.560 08:41:43 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:26.560 08:41:43 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:07:26.560 08:41:43 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:07:26.560 08:41:43 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:26.560 08:41:43 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:26.560 08:41:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:26.560 08:41:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.560 08:41:43 -- common/autotest_common.sh@10 -- # set +x 00:07:26.560 ************************************ 00:07:26.560 START TEST nvme_mount 00:07:26.560 ************************************ 00:07:26.560 08:41:43 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:07:26.560 08:41:43 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:26.560 08:41:43 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:26.560 08:41:43 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:26.560 08:41:43 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:26.560 08:41:43 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:26.560 08:41:43 -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:26.560 08:41:43 -- setup/common.sh@40 -- # local part_no=1 00:07:26.560 08:41:43 -- setup/common.sh@41 -- # local size=1073741824 00:07:26.560 08:41:43 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:26.560 08:41:43 -- setup/common.sh@44 -- # parts=() 00:07:26.560 08:41:43 -- setup/common.sh@44 -- # local parts 00:07:26.560 08:41:43 -- setup/common.sh@46 -- # (( part = 1 )) 00:07:26.560 08:41:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:26.560 08:41:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:26.560 08:41:43 -- setup/common.sh@46 -- # (( part++ )) 00:07:26.560 08:41:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:26.560 08:41:43 -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:26.560 08:41:43 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:26.560 08:41:43 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:27.499 Creating new GPT entries in memory. 00:07:27.499 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:27.499 other utilities. 00:07:27.499 08:41:44 -- setup/common.sh@57 -- # (( part = 1 )) 00:07:27.499 08:41:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:27.499 08:41:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:27.499 08:41:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:27.499 08:41:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:28.436 Creating new GPT entries in memory. 00:07:28.436 The operation has completed successfully. 
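The partition bounds chosen above follow directly from the 1 GiB test size: 1073741824 bytes / 512 bytes per sector = 2097152 sectors, so a partition starting at LBA 2048 ends at 2048 + 2097152 - 1 = 2099199. The single-partition layout the test builds can be reproduced by hand with the same standard tools (destructive, shown against this rig's test disk):

    sgdisk /dev/nvme0n1 --zap-all               # wipe any existing GPT/MBR
    sgdisk /dev/nvme0n1 --new=1:2048:2099199    # one 1 GiB partition at LBA 2048
    mkfs.ext4 -qF /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount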
00:07:28.436 08:41:45 -- setup/common.sh@57 -- # (( part++ )) 00:07:28.436 08:41:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:28.436 08:41:45 -- setup/common.sh@62 -- # wait 1889514 00:07:28.436 08:41:45 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:28.436 08:41:45 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:07:28.436 08:41:45 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:28.436 08:41:45 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:28.436 08:41:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:28.695 08:41:45 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:28.695 08:41:45 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:28.695 08:41:45 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:07:28.695 08:41:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:28.695 08:41:45 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:28.695 08:41:45 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:28.695 08:41:45 -- setup/devices.sh@53 -- # local found=0 00:07:28.695 08:41:45 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:28.695 08:41:45 -- setup/devices.sh@56 -- # : 00:07:28.695 08:41:45 -- setup/devices.sh@59 -- # local pci status 00:07:28.695 08:41:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.695 08:41:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:07:28.695 08:41:45 -- setup/devices.sh@47 -- # setup output config 00:07:28.695 08:41:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:28.695 08:41:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:31.263 08:41:48 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:31.263 08:41:48 -- setup/devices.sh@63 -- # found=1 00:07:31.263 08:41:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.523 08:41:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:31.523 08:41:48 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:31.523 08:41:48 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:31.523 08:41:48 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:31.523 08:41:48 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:31.523 08:41:48 -- setup/devices.sh@110 -- # cleanup_nvme 00:07:31.523 08:41:48 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:31.523 08:41:48 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:31.523 08:41:48 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:31.523 08:41:48 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:31.523 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:31.523 08:41:48 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:31.523 08:41:48 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:31.782 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:07:31.782 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:07:31.782 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:31.782 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:31.782 08:41:48 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:07:31.782 08:41:48 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:07:31.782 08:41:48 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:31.782 08:41:48 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:31.782 08:41:48 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:31.782 08:41:48 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:31.782 08:41:49 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:31.782 08:41:49 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:07:31.782 08:41:49 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:07:31.782 08:41:49 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:31.782 08:41:49 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:31.782 08:41:49 -- setup/devices.sh@53 -- # local found=0 00:07:31.782 08:41:49 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:31.782 08:41:49 -- setup/devices.sh@56 -- # : 00:07:31.782 08:41:49 -- setup/devices.sh@59 -- # local pci status 00:07:31.782 08:41:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.782 08:41:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:07:31.782 08:41:49 -- setup/devices.sh@47 -- # setup output config 00:07:31.782 08:41:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:31.782 08:41:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:35.073 08:41:51 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:07:35.073 08:41:51 -- setup/devices.sh@63 -- # found=1 00:07:35.073 08:41:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:35.073 08:41:52 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:35.073 08:41:52 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:35.073 08:41:52 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:35.073 08:41:52 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:35.073 08:41:52 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:35.073 08:41:52 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:07:35.073 08:41:52 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:07:35.073 08:41:52 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:07:35.073 08:41:52 -- setup/devices.sh@50 -- # local mount_point= 00:07:35.073 08:41:52 -- setup/devices.sh@51 -- # local test_file= 00:07:35.073 08:41:52 -- setup/devices.sh@53 -- # local found=0 00:07:35.073 08:41:52 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:35.073 08:41:52 -- setup/devices.sh@59 -- # local pci status 00:07:35.073 08:41:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:35.073 08:41:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:07:35.073 08:41:52 -- setup/devices.sh@47 -- # setup output config 00:07:35.073 08:41:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:35.073 08:41:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:38.364 08:41:55 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.364 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.364 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.364 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.364 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.364 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.364 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.364 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.364 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.364 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.364 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.364 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.364 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.364 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.364 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.364 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.364 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.364 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.364 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.364 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.364 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.364 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.364 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.364 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.364 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.364 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.365 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.365 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.365 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.365 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.365 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.365 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.365 08:41:55 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:38.365 08:41:55 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:07:38.365 08:41:55 -- setup/devices.sh@63 -- # found=1 00:07:38.365 08:41:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:38.365 08:41:55 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:38.365 08:41:55 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:38.365 08:41:55 -- setup/devices.sh@68 -- # return 0 00:07:38.365 08:41:55 -- setup/devices.sh@128 -- # cleanup_nvme 00:07:38.365 08:41:55 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:07:38.365 08:41:55 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:07:38.365 08:41:55 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:38.365 08:41:55 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:38.365 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:38.365 00:07:38.365 real 0m11.993s 00:07:38.365 user 0m3.220s 00:07:38.365 sys 0m6.602s 00:07:38.365 08:41:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.365 08:41:55 -- common/autotest_common.sh@10 -- # set +x 00:07:38.365 ************************************ 00:07:38.365 END TEST nvme_mount 00:07:38.365 ************************************ 00:07:38.624 08:41:55 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:07:38.624 08:41:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.624 08:41:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.624 08:41:55 -- common/autotest_common.sh@10 -- # set +x 00:07:38.624 ************************************ 00:07:38.624 START TEST dm_mount 00:07:38.624 ************************************ 00:07:38.624 08:41:55 -- common/autotest_common.sh@1111 -- # dm_mount 00:07:38.624 08:41:55 -- setup/devices.sh@144 -- # pv=nvme0n1 00:07:38.625 08:41:55 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:07:38.625 08:41:55 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:07:38.625 08:41:55 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:07:38.625 08:41:55 -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:38.625 08:41:55 -- setup/common.sh@40 -- # local part_no=2 00:07:38.625 08:41:55 -- setup/common.sh@41 -- # local size=1073741824 00:07:38.625 08:41:55 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:38.625 08:41:55 -- setup/common.sh@44 -- # parts=() 00:07:38.625 08:41:55 -- setup/common.sh@44 -- # local parts 00:07:38.625 08:41:55 -- setup/common.sh@46 -- # (( part = 1 )) 00:07:38.625 08:41:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:38.625 08:41:55 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:38.625 08:41:55 -- setup/common.sh@46 -- # (( part++ )) 00:07:38.625 08:41:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:38.625 08:41:55 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:38.625 08:41:55 -- setup/common.sh@46 -- # (( part++ )) 00:07:38.625 08:41:55 -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:38.625 08:41:55 -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:38.625 08:41:55 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:38.625 08:41:55 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:07:39.562 Creating new GPT entries in memory. 00:07:39.562 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:39.562 other utilities. 00:07:39.562 08:41:56 -- setup/common.sh@57 -- # (( part = 1 )) 00:07:39.562 08:41:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:39.562 08:41:56 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:39.562 08:41:56 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:39.562 08:41:56 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:40.939 Creating new GPT entries in memory. 00:07:40.940 The operation has completed successfully. 
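For reference, the partition_drive sequence traced above (its second sgdisk pass continues just below) reduces to a short loop. A minimal standalone sketch of the same steps, assuming the /dev/nvme0n1 device and the two 1 GiB partitions used in this run:

    #!/usr/bin/env bash
    # Sketch of the partition_drive flow traced above; the device name and
    # sizes are taken from this run, not a general-purpose tool.
    disk=/dev/nvme0n1
    part_no=2
    size=$((1073741824 / 512))    # 1 GiB expressed in 512-byte sectors = 2097152
    sgdisk "$disk" --zap-all      # destroy any existing GPT/MBR structures first
    part_start=0
    part_end=0
    for ((part = 1; part <= part_no; part++)); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        # flock serializes the sgdisk calls on the device, as in the trace
        flock "$disk" sgdisk "$disk" --new=${part}:${part_start}:${part_end}
    done

With these values the loop produces exactly the ranges seen in the trace: 1:2048:2099199 and 2:2099200:4196351.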
00:07:40.940 08:41:57 -- setup/common.sh@57 -- # (( part++ )) 00:07:40.940 08:41:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:40.940 08:41:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:40.940 08:41:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:40.940 08:41:57 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:07:41.875 The operation has completed successfully. 00:07:41.876 08:41:58 -- setup/common.sh@57 -- # (( part++ )) 00:07:41.876 08:41:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:41.876 08:41:58 -- setup/common.sh@62 -- # wait 1893934 00:07:41.876 08:41:58 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:07:41.876 08:41:58 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:41.876 08:41:58 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:41.876 08:41:58 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:07:41.876 08:41:58 -- setup/devices.sh@160 -- # for t in {1..5} 00:07:41.876 08:41:58 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:41.876 08:41:58 -- setup/devices.sh@161 -- # break 00:07:41.876 08:41:58 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:41.876 08:41:58 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:07:41.876 08:41:58 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:07:41.876 08:41:58 -- setup/devices.sh@166 -- # dm=dm-0 00:07:41.876 08:41:58 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:07:41.876 08:41:58 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:07:41.876 08:41:58 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:41.876 08:41:58 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:07:41.876 08:41:58 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:41.876 08:41:58 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:41.876 08:41:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:07:41.876 08:41:58 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:41.876 08:41:58 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:41.876 08:41:58 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:07:41.876 08:41:58 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:07:41.876 08:41:58 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:41.876 08:41:58 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:41.876 08:41:58 -- setup/devices.sh@53 -- # local found=0 00:07:41.876 08:41:58 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:07:41.876 08:41:58 -- setup/devices.sh@56 -- # : 00:07:41.876 08:41:58 -- 
setup/devices.sh@59 -- # local pci status 00:07:41.876 08:41:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:41.876 08:41:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:07:41.876 08:41:58 -- setup/devices.sh@47 -- # setup output config 00:07:41.876 08:41:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:41.876 08:41:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:45.165 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.165 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.165 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.165 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.165 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.165 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.165 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.165 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.165 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.165 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.165 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.165 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.165 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.165 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.166 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.166 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.166 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.166 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.166 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.166 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.166 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.166 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.166 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.166 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.166 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.166 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.166 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.166 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.166 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.166 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.166 08:42:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.166 08:42:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.166 08:42:02 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:45.166 08:42:02 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:07:45.166 08:42:02 -- 
setup/devices.sh@63 -- # found=1 00:07:45.166 08:42:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.166 08:42:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:45.166 08:42:02 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:07:45.166 08:42:02 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:45.166 08:42:02 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:07:45.166 08:42:02 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:45.166 08:42:02 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:07:45.166 08:42:02 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:07:45.166 08:42:02 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:07:45.166 08:42:02 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:07:45.166 08:42:02 -- setup/devices.sh@50 -- # local mount_point= 00:07:45.166 08:42:02 -- setup/devices.sh@51 -- # local test_file= 00:07:45.166 08:42:02 -- setup/devices.sh@53 -- # local found=0 00:07:45.166 08:42:02 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:45.166 08:42:02 -- setup/devices.sh@59 -- # local pci status 00:07:45.166 08:42:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:45.166 08:42:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:07:45.166 08:42:02 -- setup/devices.sh@47 -- # setup output config 00:07:45.166 08:42:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:07:45.166 08:42:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # 
read -r pci _ _ status
00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]]
00:07:48.493 08:42:05 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:07:48.493 08:42:05 -- setup/devices.sh@63 -- # found=1
00:07:48.493 08:42:05 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:07:48.493 08:42:05 -- setup/devices.sh@66 -- # (( found == 1 ))
00:07:48.493 08:42:05 -- setup/devices.sh@68 -- # [[ -n '' ]]
00:07:48.493 08:42:05 -- setup/devices.sh@68 -- # return 0
00:07:48.493 08:42:05 -- setup/devices.sh@187 -- # cleanup_dm
00:07:48.494 08:42:05 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:07:48.494 08:42:05 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:07:48.494 08:42:05 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:07:48.494 08:42:05 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:07:48.494 08:42:05 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:07:48.494 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:07:48.494 08:42:05 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:07:48.494 08:42:05 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:07:48.494
00:07:48.494 real    0m9.693s
00:07:48.494 user    0m2.330s
00:07:48.494 sys     0m4.418s
00:07:48.494 08:42:05 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:48.494 08:42:05 -- common/autotest_common.sh@10 -- # set +x
00:07:48.494 ************************************
00:07:48.494 END TEST dm_mount
00:07:48.494 ************************************
00:07:48.494 08:42:05 -- setup/devices.sh@1 -- # cleanup
00:07:48.494 08:42:05 -- setup/devices.sh@11 -- # cleanup_nvme
00:07:48.494 08:42:05 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:07:48.494 08:42:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:07:48.494 08:42:05 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:07:48.494 08:42:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:07:48.494 08:42:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:07:48.753 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:07:48.753 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54
00:07:48.753 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:07:48.753 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:07:48.753 08:42:05 -- setup/devices.sh@12 -- # cleanup_dm
00:07:48.753 08:42:05 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:07:48.753 08:42:05 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:07:48.753 08:42:05 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:07:48.753 08:42:05 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:07:48.753 08:42:05 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:07:48.753 08:42:05 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:07:48.753
00:07:48.753 real    0m26.053s
00:07:48.753 user    0m6.951s
00:07:48.753 sys     0m13.825s
00:07:48.753 08:42:05 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:48.753 08:42:05 -- common/autotest_common.sh@10 -- # set +x
00:07:48.753 ************************************
00:07:48.753 END TEST devices
00:07:48.753 ************************************
00:07:48.753
00:07:48.753 real    1m31.486s
00:07:48.753 user    0m27.047s
00:07:48.753 sys     0m52.265s
00:07:48.753 08:42:05 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:07:48.753 08:42:05 -- common/autotest_common.sh@10 -- # set +x
00:07:48.753 ************************************
00:07:48.753 END TEST setup.sh
00:07:48.753 ************************************
00:07:48.753 08:42:05 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:07:52.044 Hugepages
00:07:52.044 node     hugesize     free /  total
00:07:52.044 node0   1048576kB        0 /      0
00:07:52.044 node0      2048kB     2048 /   2048
00:07:52.044 node1   1048576kB        0 /      0
00:07:52.044 node1      2048kB        0 /      0
00:07:52.044
00:07:52.044 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:07:52.044 I/OAT    0000:00:04.0    8086   2021   0       ioatdma          -          -
00:07:52.044 I/OAT    0000:00:04.1    8086   2021   0       ioatdma          -          -
00:07:52.044 I/OAT    0000:00:04.2    8086   2021   0       ioatdma          -          -
00:07:52.044 I/OAT    0000:00:04.3    8086   2021   0       ioatdma          -          -
00:07:52.044 I/OAT    0000:00:04.4    8086   2021   0       ioatdma          -          -
00:07:52.044 I/OAT    0000:00:04.5    8086   2021   0       ioatdma          -          -
00:07:52.044 I/OAT    0000:00:04.6    8086   2021   0       ioatdma          -          -
00:07:52.044 I/OAT    0000:00:04.7    8086   2021   0       ioatdma          -          -
00:07:52.044 I/OAT    0000:80:04.0    8086   2021   1       ioatdma          -          -
00:07:52.044 I/OAT    0000:80:04.1    8086   2021   1       ioatdma          -          -
00:07:52.044 I/OAT    0000:80:04.2    8086   2021   1       ioatdma          -          -
00:07:52.044 I/OAT    0000:80:04.3    8086   2021   1       ioatdma          -          -
00:07:52.044 I/OAT    0000:80:04.4    8086   2021   1       ioatdma          -          -
00:07:52.044 I/OAT    0000:80:04.5    8086   2021   1       ioatdma          -          -
00:07:52.044 I/OAT    0000:80:04.6    8086   2021   1       ioatdma          -          -
00:07:52.044 I/OAT    0000:80:04.7    8086   2021   1       ioatdma          -          -
00:07:52.307 NVMe     0000:d8:00.0    8086   0a54   1       nvme             nvme0      nvme0n1
00:07:52.307 08:42:09 -- spdk/autotest.sh@130 -- # uname -s
00:07:52.307 08:42:09 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:07:52.307 08:42:09 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:07:52.307 08:42:09 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:07:55.594 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:07:55.594 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:07:55.594 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:07:55.594 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:07:55.594 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:07:55.594 0000:00:04.2 (8086
2021): ioatdma -> vfio-pci 00:07:55.594 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:55.594 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:55.594 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:55.594 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:55.594 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:55.594 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:55.594 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:55.594 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:55.594 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:55.594 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:56.967 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:07:56.967 08:42:14 -- common/autotest_common.sh@1518 -- # sleep 1 00:07:57.904 08:42:15 -- common/autotest_common.sh@1519 -- # bdfs=() 00:07:57.904 08:42:15 -- common/autotest_common.sh@1519 -- # local bdfs 00:07:57.904 08:42:15 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:57.904 08:42:15 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:57.904 08:42:15 -- common/autotest_common.sh@1499 -- # bdfs=() 00:07:57.904 08:42:15 -- common/autotest_common.sh@1499 -- # local bdfs 00:07:57.904 08:42:15 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:58.162 08:42:15 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:58.162 08:42:15 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:07:58.162 08:42:15 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:07:58.162 08:42:15 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:d8:00.0 00:07:58.162 08:42:15 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:01.444 Waiting for block devices as requested 00:08:01.444 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:01.444 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:01.702 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:01.702 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:01.702 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:01.702 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:01.960 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:01.960 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:01.960 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:02.219 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:02.219 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:02.219 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:02.477 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:02.477 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:02.477 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:02.745 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:02.745 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:08:03.028 08:42:19 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:03.028 08:42:19 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:08:03.028 08:42:19 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:08:03.028 08:42:19 -- common/autotest_common.sh@1488 -- # grep 0000:d8:00.0/nvme/nvme 00:08:03.028 08:42:19 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:08:03.028 08:42:20 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:08:03.028 08:42:20 -- 
common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:08:03.028 08:42:20 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:08:03.028 08:42:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:03.028 08:42:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:03.028 08:42:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:03.028 08:42:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:03.028 08:42:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:03.028 08:42:20 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:08:03.028 08:42:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:03.028 08:42:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:03.028 08:42:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:03.028 08:42:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:03.028 08:42:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:03.028 08:42:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:03.028 08:42:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:03.028 08:42:20 -- common/autotest_common.sh@1543 -- # continue 00:08:03.028 08:42:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:08:03.028 08:42:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:03.028 08:42:20 -- common/autotest_common.sh@10 -- # set +x 00:08:03.028 08:42:20 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:08:03.028 08:42:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:03.028 08:42:20 -- common/autotest_common.sh@10 -- # set +x 00:08:03.028 08:42:20 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:06.310 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:06.310 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:06.310 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:06.310 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:06.310 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:06.310 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:06.310 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:06.310 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:06.310 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:06.310 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:06.310 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:06.310 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:06.310 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:06.310 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:06.310 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:06.310 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:08.208 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:08:08.208 08:42:25 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:08:08.208 08:42:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:08.208 08:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:08.208 08:42:25 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:08:08.208 08:42:25 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:08:08.208 08:42:25 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:08:08.208 08:42:25 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:08.208 08:42:25 -- common/autotest_common.sh@1563 -- # local bdfs 00:08:08.208 08:42:25 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:08:08.208 08:42:25 -- common/autotest_common.sh@1499 -- # bdfs=() 00:08:08.209 
08:42:25 -- common/autotest_common.sh@1499 -- # local bdfs 00:08:08.209 08:42:25 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:08.209 08:42:25 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:08.209 08:42:25 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:08:08.209 08:42:25 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:08:08.209 08:42:25 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:d8:00.0 00:08:08.209 08:42:25 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:08:08.209 08:42:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:08:08.209 08:42:25 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:08:08.209 08:42:25 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:08:08.209 08:42:25 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:08:08.209 08:42:25 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:d8:00.0 00:08:08.209 08:42:25 -- common/autotest_common.sh@1578 -- # [[ -z 0000:d8:00.0 ]] 00:08:08.209 08:42:25 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=1904282 00:08:08.209 08:42:25 -- common/autotest_common.sh@1584 -- # waitforlisten 1904282 00:08:08.209 08:42:25 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:08.209 08:42:25 -- common/autotest_common.sh@817 -- # '[' -z 1904282 ']' 00:08:08.209 08:42:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.209 08:42:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:08.209 08:42:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.209 08:42:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:08.209 08:42:25 -- common/autotest_common.sh@10 -- # set +x 00:08:08.467 [2024-04-26 08:42:25.500253] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
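The get_nvme_bdfs_by_id sequence traced above is sysfs matching over the BDFs that gen_nvme.sh reports. A hedged sketch of the same idea, using the gen_nvme.sh/jq pipeline and the /sys path that appear verbatim in the trace (loop variable names are illustrative):

    # Collect NVMe BDFs whose PCI device id matches 0x0a54, as the trace
    # above does for 0000:d8:00.0.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    target_id=0x0a54
    bdfs=()
    while read -r bdf; do
        dev=$(cat "/sys/bus/pci/devices/$bdf/device")   # PCI device id, e.g. 0x0a54
        [[ $dev == "$target_id" ]] && bdfs+=("$bdf")
    done < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    printf '%s\n' "${bdfs[@]}"    # prints 0000:d8:00.0 on this machine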
00:08:08.467 [2024-04-26 08:42:25.500304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1904282 ]
00:08:08.467 EAL: No free 2048 kB hugepages reported on node 1
00:08:08.467 [2024-04-26 08:42:25.571035] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:08.467 [2024-04-26 08:42:25.639324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:09.399 08:42:26 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:08:09.399 08:42:26 -- common/autotest_common.sh@850 -- # return 0
00:08:09.400 08:42:26 -- common/autotest_common.sh@1586 -- # bdf_id=0
00:08:09.400 08:42:26 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}"
00:08:09.400 08:42:26 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
00:08:12.680 nvme0n1
00:08:12.680 08:42:29 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:08:12.680 [2024-04-26 08:42:29.437408] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:08:12.680 request:
00:08:12.680 {
00:08:12.680   "nvme_ctrlr_name": "nvme0",
00:08:12.680   "password": "test",
00:08:12.680   "method": "bdev_nvme_opal_revert",
00:08:12.680   "req_id": 1
00:08:12.680 }
00:08:12.680 Got JSON-RPC error response
00:08:12.680 response:
00:08:12.680 {
00:08:12.680   "code": -32602,
00:08:12.680   "message": "Invalid parameters"
00:08:12.680 }
00:08:12.680 08:42:29 -- common/autotest_common.sh@1590 -- # true
00:08:12.680 08:42:29 -- common/autotest_common.sh@1591 -- # (( ++bdf_id ))
00:08:12.680 08:42:29 -- common/autotest_common.sh@1594 -- # killprocess 1904282
00:08:12.680 08:42:29 -- common/autotest_common.sh@936 -- # '[' -z 1904282 ']'
00:08:12.680 08:42:29 -- common/autotest_common.sh@940 -- # kill -0 1904282
00:08:12.680 08:42:29 -- common/autotest_common.sh@941 -- # uname
00:08:12.680 08:42:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:12.680 08:42:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1904282
00:08:12.680 08:42:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:12.680 08:42:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:12.680 08:42:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1904282'
00:08:12.680 killing process with pid 1904282
00:08:12.680 08:42:29 -- common/autotest_common.sh@955 -- # kill 1904282
00:08:12.680 08:42:29 -- common/autotest_common.sh@960 -- # wait 1904282
00:08:14.595 08:42:31 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:08:14.595 08:42:31 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:08:14.595 08:42:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:08:14.595 08:42:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:08:14.596 08:42:31 -- spdk/autotest.sh@162 -- # timing_enter lib
00:08:14.596 08:42:31 -- common/autotest_common.sh@710 -- # xtrace_disable
00:08:14.596 08:42:31 -- common/autotest_common.sh@10 -- # set +x
00:08:14.596 08:42:31 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:08:14.596 08:42:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:14.596 08:42:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
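The opal-revert exchange earlier in this block can be reproduced outside the harness with two rpc.py calls. A sketch, assuming a running spdk_tgt listening on the default /var/tmp/spdk.sock shown in the trace; both subcommands and flags are exactly the ones traced above:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Attach the controller at 0000:d8:00.0; prints the bdev name (nvme0n1 here).
    "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
    # On this drive the revert fails with -32602 "Invalid parameters" because
    # the controller does not support Opal; "|| true" mirrors the trace, which
    # tolerates the error and moves on.
    "$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test || true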
00:08:14.596 08:42:31 -- common/autotest_common.sh@10 -- # set +x
00:08:14.596 ************************************
00:08:14.596 START TEST env
00:08:14.596 ************************************
00:08:14.596 08:42:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:08:14.855 * Looking for test storage...
00:08:14.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:08:14.855 08:42:31 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:08:14.855 08:42:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:14.855 08:42:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:14.855 08:42:31 -- common/autotest_common.sh@10 -- # set +x
00:08:14.855 ************************************
00:08:14.855 START TEST env_memory
00:08:14.855 ************************************
00:08:14.855 08:42:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:08:14.855
00:08:14.855
00:08:14.855 CUnit - A unit testing framework for C - Version 2.1-3
00:08:14.855 http://cunit.sourceforge.net/
00:08:14.855
00:08:14.855
00:08:14.855 Suite: memory
00:08:14.855 Test: alloc and free memory map ...[2024-04-26 08:42:32.088603] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:08:14.855 passed
00:08:15.113 Test: mem map translation ...[2024-04-26 08:42:32.106472] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:08:15.113 [2024-04-26 08:42:32.106488] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:08:15.113 [2024-04-26 08:42:32.106525] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:08:15.113 [2024-04-26 08:42:32.106533] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:08:15.113 passed
00:08:15.113 Test: mem map registration ...[2024-04-26 08:42:32.143016] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:08:15.113 [2024-04-26 08:42:32.143031] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:08:15.113 passed
00:08:15.113 Test: mem map adjacent registrations ...passed
00:08:15.113
00:08:15.113 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:15.113               suites      1      1    n/a      0        0
00:08:15.113                tests      4      4      4      0        0
00:08:15.113              asserts    152    152    152      0      n/a
00:08:15.113
00:08:15.113 Elapsed time =    0.132 seconds
00:08:15.113
00:08:15.113 real    0m0.146s
00:08:15.113 user    0m0.135s
00:08:15.113 sys     0m0.011s
00:08:15.113 08:42:32 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:08:15.113 08:42:32 -- common/autotest_common.sh@10 -- # set +x
00:08:15.113 ************************************
00:08:15.113 END TEST env_memory
00:08:15.113 ************************************
00:08:15.113 08:42:32 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:08:15.113 08:42:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:15.113 08:42:32 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:15.113 08:42:32 -- common/autotest_common.sh@10 -- # set +x
00:08:15.372 ************************************
00:08:15.372 START TEST env_vtophys
00:08:15.372 ************************************
00:08:15.372 08:42:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:08:15.372 EAL: lib.eal log level changed from notice to debug
00:08:15.372 EAL: Detected lcore 0 as core 0 on socket 0
00:08:15.372 EAL: Detected lcore 1 as core 1 on socket 0
00:08:15.372 EAL: Detected lcore 2 as core 2 on socket 0
00:08:15.372 EAL: Detected lcore 3 as core 3 on socket 0
00:08:15.372 EAL: Detected lcore 4 as core 4 on socket 0
00:08:15.372 EAL: Detected lcore 5 as core 5 on socket 0
00:08:15.372 EAL: Detected lcore 6 as core 6 on socket 0
00:08:15.372 EAL: Detected lcore 7 as core 8 on socket 0
00:08:15.372 EAL: Detected lcore 8 as core 9 on socket 0
00:08:15.372 EAL: Detected lcore 9 as core 10 on socket 0
00:08:15.372 EAL: Detected lcore 10 as core 11 on socket 0
00:08:15.372 EAL: Detected lcore 11 as core 12 on socket 0
00:08:15.372 EAL: Detected lcore 12 as core 13 on socket 0
00:08:15.372 EAL: Detected lcore 13 as core 14 on socket 0
00:08:15.372 EAL: Detected lcore 14 as core 16 on socket 0
00:08:15.372 EAL: Detected lcore 15 as core 17 on socket 0
00:08:15.372 EAL: Detected lcore 16 as core 18 on socket 0
00:08:15.372 EAL: Detected lcore 17 as core 19 on socket 0
00:08:15.372 EAL: Detected lcore 18 as core 20 on socket 0
00:08:15.372 EAL: Detected lcore 19 as core 21 on socket 0
00:08:15.372 EAL: Detected lcore 20 as core 22 on socket 0
00:08:15.372 EAL: Detected lcore 21 as core 24 on socket 0
00:08:15.372 EAL: Detected lcore 22 as core 25 on socket 0
00:08:15.372 EAL: Detected lcore 23 as core 26 on socket 0
00:08:15.372 EAL: Detected lcore 24 as core 27 on socket 0
00:08:15.372 EAL: Detected lcore 25 as core 28 on socket 0
00:08:15.372 EAL: Detected lcore 26 as core 29 on socket 0
00:08:15.372 EAL: Detected lcore 27 as core 30 on socket 0
00:08:15.372 EAL: Detected lcore 28 as core 0 on socket 1
00:08:15.372 EAL: Detected lcore 29 as core 1 on socket 1
00:08:15.372 EAL: Detected lcore 30 as core 2 on socket 1
00:08:15.372 EAL: Detected lcore 31 as core 3 on socket 1
00:08:15.372 EAL: Detected lcore 32 as core 4 on socket 1
00:08:15.372 EAL: Detected lcore 33 as core 5 on socket 1
00:08:15.372 EAL: Detected lcore 34 as core 6 on socket 1
00:08:15.372 EAL: Detected lcore 35 as core 8 on socket 1
00:08:15.372 EAL: Detected lcore 36 as core 9 on socket 1
00:08:15.372 EAL: Detected lcore 37 as core 10 on socket 1
00:08:15.372 EAL: Detected lcore 38 as core 11 on socket 1
00:08:15.372 EAL: Detected lcore 39 as core 12 on socket 1
00:08:15.372 EAL: Detected lcore 40 as core 13 on socket 1
00:08:15.372 EAL: Detected lcore 41 as core 14 on socket 1
00:08:15.372 EAL: Detected lcore 42 as core 16 on socket 1
00:08:15.372 EAL: Detected lcore 43 as core 17 on socket 1
00:08:15.372 EAL: Detected lcore 44 as core 18 on socket 1
00:08:15.372 EAL: Detected lcore 45 as core 19 on socket 1
00:08:15.372 EAL: Detected lcore 46 as core 20 on socket 1
00:08:15.372 EAL: Detected lcore 47 as core 21 on socket 1
00:08:15.372 EAL: Detected lcore 48 as core 22 on
socket 1 00:08:15.372 EAL: Detected lcore 49 as core 24 on socket 1 00:08:15.372 EAL: Detected lcore 50 as core 25 on socket 1 00:08:15.372 EAL: Detected lcore 51 as core 26 on socket 1 00:08:15.372 EAL: Detected lcore 52 as core 27 on socket 1 00:08:15.372 EAL: Detected lcore 53 as core 28 on socket 1 00:08:15.372 EAL: Detected lcore 54 as core 29 on socket 1 00:08:15.372 EAL: Detected lcore 55 as core 30 on socket 1 00:08:15.372 EAL: Detected lcore 56 as core 0 on socket 0 00:08:15.372 EAL: Detected lcore 57 as core 1 on socket 0 00:08:15.372 EAL: Detected lcore 58 as core 2 on socket 0 00:08:15.372 EAL: Detected lcore 59 as core 3 on socket 0 00:08:15.372 EAL: Detected lcore 60 as core 4 on socket 0 00:08:15.372 EAL: Detected lcore 61 as core 5 on socket 0 00:08:15.372 EAL: Detected lcore 62 as core 6 on socket 0 00:08:15.372 EAL: Detected lcore 63 as core 8 on socket 0 00:08:15.372 EAL: Detected lcore 64 as core 9 on socket 0 00:08:15.372 EAL: Detected lcore 65 as core 10 on socket 0 00:08:15.372 EAL: Detected lcore 66 as core 11 on socket 0 00:08:15.372 EAL: Detected lcore 67 as core 12 on socket 0 00:08:15.372 EAL: Detected lcore 68 as core 13 on socket 0 00:08:15.372 EAL: Detected lcore 69 as core 14 on socket 0 00:08:15.372 EAL: Detected lcore 70 as core 16 on socket 0 00:08:15.372 EAL: Detected lcore 71 as core 17 on socket 0 00:08:15.372 EAL: Detected lcore 72 as core 18 on socket 0 00:08:15.372 EAL: Detected lcore 73 as core 19 on socket 0 00:08:15.372 EAL: Detected lcore 74 as core 20 on socket 0 00:08:15.372 EAL: Detected lcore 75 as core 21 on socket 0 00:08:15.372 EAL: Detected lcore 76 as core 22 on socket 0 00:08:15.372 EAL: Detected lcore 77 as core 24 on socket 0 00:08:15.372 EAL: Detected lcore 78 as core 25 on socket 0 00:08:15.372 EAL: Detected lcore 79 as core 26 on socket 0 00:08:15.372 EAL: Detected lcore 80 as core 27 on socket 0 00:08:15.372 EAL: Detected lcore 81 as core 28 on socket 0 00:08:15.372 EAL: Detected lcore 82 as core 29 on socket 0 00:08:15.372 EAL: Detected lcore 83 as core 30 on socket 0 00:08:15.372 EAL: Detected lcore 84 as core 0 on socket 1 00:08:15.372 EAL: Detected lcore 85 as core 1 on socket 1 00:08:15.372 EAL: Detected lcore 86 as core 2 on socket 1 00:08:15.372 EAL: Detected lcore 87 as core 3 on socket 1 00:08:15.372 EAL: Detected lcore 88 as core 4 on socket 1 00:08:15.372 EAL: Detected lcore 89 as core 5 on socket 1 00:08:15.372 EAL: Detected lcore 90 as core 6 on socket 1 00:08:15.372 EAL: Detected lcore 91 as core 8 on socket 1 00:08:15.372 EAL: Detected lcore 92 as core 9 on socket 1 00:08:15.372 EAL: Detected lcore 93 as core 10 on socket 1 00:08:15.372 EAL: Detected lcore 94 as core 11 on socket 1 00:08:15.372 EAL: Detected lcore 95 as core 12 on socket 1 00:08:15.372 EAL: Detected lcore 96 as core 13 on socket 1 00:08:15.372 EAL: Detected lcore 97 as core 14 on socket 1 00:08:15.372 EAL: Detected lcore 98 as core 16 on socket 1 00:08:15.372 EAL: Detected lcore 99 as core 17 on socket 1 00:08:15.372 EAL: Detected lcore 100 as core 18 on socket 1 00:08:15.372 EAL: Detected lcore 101 as core 19 on socket 1 00:08:15.372 EAL: Detected lcore 102 as core 20 on socket 1 00:08:15.372 EAL: Detected lcore 103 as core 21 on socket 1 00:08:15.372 EAL: Detected lcore 104 as core 22 on socket 1 00:08:15.372 EAL: Detected lcore 105 as core 24 on socket 1 00:08:15.372 EAL: Detected lcore 106 as core 25 on socket 1 00:08:15.372 EAL: Detected lcore 107 as core 26 on socket 1 00:08:15.372 EAL: Detected lcore 108 as core 27 on socket 1 00:08:15.372 
EAL: Detected lcore 109 as core 28 on socket 1 00:08:15.372 EAL: Detected lcore 110 as core 29 on socket 1 00:08:15.372 EAL: Detected lcore 111 as core 30 on socket 1 00:08:15.372 EAL: Maximum logical cores by configuration: 128 00:08:15.372 EAL: Detected CPU lcores: 112 00:08:15.372 EAL: Detected NUMA nodes: 2 00:08:15.372 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:08:15.372 EAL: Detected shared linkage of DPDK 00:08:15.372 EAL: No shared files mode enabled, IPC will be disabled 00:08:15.372 EAL: Bus pci wants IOVA as 'DC' 00:08:15.372 EAL: Buses did not request a specific IOVA mode. 00:08:15.372 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:15.372 EAL: Selected IOVA mode 'VA' 00:08:15.372 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.372 EAL: Probing VFIO support... 00:08:15.372 EAL: IOMMU type 1 (Type 1) is supported 00:08:15.372 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:15.372 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:15.372 EAL: VFIO support initialized 00:08:15.372 EAL: Ask a virtual area of 0x2e000 bytes 00:08:15.372 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:15.372 EAL: Setting up physically contiguous memory... 00:08:15.372 EAL: Setting maximum number of open files to 524288 00:08:15.372 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:15.372 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:15.372 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:15.372 EAL: Ask a virtual area of 0x61000 bytes 00:08:15.372 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:15.372 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:15.372 EAL: Ask a virtual area of 0x400000000 bytes 00:08:15.372 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:15.372 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:15.372 EAL: Ask a virtual area of 0x61000 bytes 00:08:15.372 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:15.372 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:15.372 EAL: Ask a virtual area of 0x400000000 bytes 00:08:15.373 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:15.373 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:15.373 EAL: Ask a virtual area of 0x61000 bytes 00:08:15.373 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:15.373 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:15.373 EAL: Ask a virtual area of 0x400000000 bytes 00:08:15.373 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:15.373 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:15.373 EAL: Ask a virtual area of 0x61000 bytes 00:08:15.373 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:15.373 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:15.373 EAL: Ask a virtual area of 0x400000000 bytes 00:08:15.373 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:15.373 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:15.373 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:08:15.373 EAL: Ask a virtual area of 0x61000 bytes 00:08:15.373 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:15.373 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:15.373 EAL: Ask a virtual area of 0x400000000 bytes 00:08:15.373 EAL: Virtual area found at 0x201000a00000 
(size = 0x400000000) 00:08:15.373 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:15.373 EAL: Ask a virtual area of 0x61000 bytes 00:08:15.373 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:15.373 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:15.373 EAL: Ask a virtual area of 0x400000000 bytes 00:08:15.373 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:15.373 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:15.373 EAL: Ask a virtual area of 0x61000 bytes 00:08:15.373 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:15.373 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:15.373 EAL: Ask a virtual area of 0x400000000 bytes 00:08:15.373 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:15.373 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:15.373 EAL: Ask a virtual area of 0x61000 bytes 00:08:15.373 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:15.373 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:15.373 EAL: Ask a virtual area of 0x400000000 bytes 00:08:15.373 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:08:15.373 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:15.373 EAL: Hugepages will be freed exactly as allocated. 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: TSC frequency is ~2500000 KHz 00:08:15.373 EAL: Main lcore 0 is ready (tid=7ffac8aa1a00;cpuset=[0]) 00:08:15.373 EAL: Trying to obtain current memory policy. 00:08:15.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.373 EAL: Restoring previous memory policy: 0 00:08:15.373 EAL: request: mp_malloc_sync 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: Heap on socket 0 was expanded by 2MB 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:15.373 EAL: Mem event callback 'spdk:(nil)' registered 00:08:15.373 00:08:15.373 00:08:15.373 CUnit - A unit testing framework for C - Version 2.1-3 00:08:15.373 http://cunit.sourceforge.net/ 00:08:15.373 00:08:15.373 00:08:15.373 Suite: components_suite 00:08:15.373 Test: vtophys_malloc_test ...passed 00:08:15.373 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:15.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.373 EAL: Restoring previous memory policy: 4 00:08:15.373 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.373 EAL: request: mp_malloc_sync 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: Heap on socket 0 was expanded by 4MB 00:08:15.373 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.373 EAL: request: mp_malloc_sync 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: Heap on socket 0 was shrunk by 4MB 00:08:15.373 EAL: Trying to obtain current memory policy. 
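Before the suite continues with the rest of the allocation ladder below, note that the memseg lists just set up are carved from the preallocated 2048 kB hugepage pools that the Hugepages table earlier reported (2048/2048 on node0, 0/0 on node1). A quick way to inspect those pools, using the standard sysfs layout rather than anything SPDK-specific:

    # Show total and free 2 MiB hugepages per NUMA node; these are the pools
    # the EAL memseg lists above are allocated from.
    for node in /sys/devices/system/node/node[0-9]*; do
        total=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
        free=$(cat "$node/hugepages/hugepages-2048kB/free_hugepages")
        echo "$(basename "$node"): $free free / $total total (2048kB)"
    done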
00:08:15.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.373 EAL: Restoring previous memory policy: 4 00:08:15.373 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.373 EAL: request: mp_malloc_sync 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: Heap on socket 0 was expanded by 6MB 00:08:15.373 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.373 EAL: request: mp_malloc_sync 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: Heap on socket 0 was shrunk by 6MB 00:08:15.373 EAL: Trying to obtain current memory policy. 00:08:15.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.373 EAL: Restoring previous memory policy: 4 00:08:15.373 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.373 EAL: request: mp_malloc_sync 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: Heap on socket 0 was expanded by 10MB 00:08:15.373 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.373 EAL: request: mp_malloc_sync 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: Heap on socket 0 was shrunk by 10MB 00:08:15.373 EAL: Trying to obtain current memory policy. 00:08:15.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.373 EAL: Restoring previous memory policy: 4 00:08:15.373 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.373 EAL: request: mp_malloc_sync 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: Heap on socket 0 was expanded by 18MB 00:08:15.373 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.373 EAL: request: mp_malloc_sync 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: Heap on socket 0 was shrunk by 18MB 00:08:15.373 EAL: Trying to obtain current memory policy. 00:08:15.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.373 EAL: Restoring previous memory policy: 4 00:08:15.373 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.373 EAL: request: mp_malloc_sync 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: Heap on socket 0 was expanded by 34MB 00:08:15.373 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.373 EAL: request: mp_malloc_sync 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: Heap on socket 0 was shrunk by 34MB 00:08:15.373 EAL: Trying to obtain current memory policy. 00:08:15.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.373 EAL: Restoring previous memory policy: 4 00:08:15.373 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.373 EAL: request: mp_malloc_sync 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: Heap on socket 0 was expanded by 66MB 00:08:15.373 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.373 EAL: request: mp_malloc_sync 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: Heap on socket 0 was shrunk by 66MB 00:08:15.373 EAL: Trying to obtain current memory policy. 
00:08:15.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.373 EAL: Restoring previous memory policy: 4 00:08:15.373 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.373 EAL: request: mp_malloc_sync 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: Heap on socket 0 was expanded by 130MB 00:08:15.373 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.373 EAL: request: mp_malloc_sync 00:08:15.373 EAL: No shared files mode enabled, IPC is disabled 00:08:15.373 EAL: Heap on socket 0 was shrunk by 130MB 00:08:15.373 EAL: Trying to obtain current memory policy. 00:08:15.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.640 EAL: Restoring previous memory policy: 4 00:08:15.640 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.640 EAL: request: mp_malloc_sync 00:08:15.640 EAL: No shared files mode enabled, IPC is disabled 00:08:15.640 EAL: Heap on socket 0 was expanded by 258MB 00:08:15.640 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.640 EAL: request: mp_malloc_sync 00:08:15.640 EAL: No shared files mode enabled, IPC is disabled 00:08:15.640 EAL: Heap on socket 0 was shrunk by 258MB 00:08:15.640 EAL: Trying to obtain current memory policy. 00:08:15.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:15.640 EAL: Restoring previous memory policy: 4 00:08:15.640 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.640 EAL: request: mp_malloc_sync 00:08:15.640 EAL: No shared files mode enabled, IPC is disabled 00:08:15.640 EAL: Heap on socket 0 was expanded by 514MB 00:08:15.900 EAL: Calling mem event callback 'spdk:(nil)' 00:08:15.900 EAL: request: mp_malloc_sync 00:08:15.900 EAL: No shared files mode enabled, IPC is disabled 00:08:15.900 EAL: Heap on socket 0 was shrunk by 514MB 00:08:15.900 EAL: Trying to obtain current memory policy. 
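[editor's note] The expand/shrink pairs logged above come from vtophys_spdk_malloc_test allocating a roughly doubling buffer each round and freeing it again (the exact allocation sizes are internal to the test); the heap deltas EAL reports follow the pattern 2^n + 2 MB. A quick, SPDK-free shell check of that pattern:

    # Reproduce the heap-expansion sizes seen above: 2^n + 2 MB for n = 1..10.
    for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB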
00:08:15.900 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:16.157 EAL: Restoring previous memory policy: 4 00:08:16.157 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.157 EAL: request: mp_malloc_sync 00:08:16.157 EAL: No shared files mode enabled, IPC is disabled 00:08:16.157 EAL: Heap on socket 0 was expanded by 1026MB 00:08:16.157 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.415 EAL: request: mp_malloc_sync 00:08:16.415 EAL: No shared files mode enabled, IPC is disabled 00:08:16.415 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:16.415 passed 00:08:16.415 00:08:16.415 Run Summary: Type Total Ran Passed Failed Inactive 00:08:16.415 suites 1 1 n/a 0 0 00:08:16.415 tests 2 2 2 0 0 00:08:16.415 asserts 497 497 497 0 n/a 00:08:16.415 00:08:16.415 Elapsed time = 0.965 seconds 00:08:16.415 EAL: Calling mem event callback 'spdk:(nil)' 00:08:16.415 EAL: request: mp_malloc_sync 00:08:16.415 EAL: No shared files mode enabled, IPC is disabled 00:08:16.415 EAL: Heap on socket 0 was shrunk by 2MB 00:08:16.415 EAL: No shared files mode enabled, IPC is disabled 00:08:16.415 EAL: No shared files mode enabled, IPC is disabled 00:08:16.415 EAL: No shared files mode enabled, IPC is disabled 00:08:16.415 00:08:16.415 real 0m1.093s 00:08:16.415 user 0m0.630s 00:08:16.415 sys 0m0.433s 00:08:16.415 08:42:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:16.415 08:42:33 -- common/autotest_common.sh@10 -- # set +x 00:08:16.415 ************************************ 00:08:16.415 END TEST env_vtophys 00:08:16.415 ************************************ 00:08:16.415 08:42:33 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:16.415 08:42:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:16.415 08:42:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.415 08:42:33 -- common/autotest_common.sh@10 -- # set +x 00:08:16.673 ************************************ 00:08:16.673 START TEST env_pci 00:08:16.673 ************************************ 00:08:16.673 08:42:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:16.673 00:08:16.673 00:08:16.673 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.673 http://cunit.sourceforge.net/ 00:08:16.673 00:08:16.673 00:08:16.673 Suite: pci 00:08:16.673 Test: pci_hook ...[2024-04-26 08:42:33.690423] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1905845 has claimed it 00:08:16.673 EAL: Cannot find device (10000:00:01.0) 00:08:16.673 EAL: Failed to attach device on primary process 00:08:16.673 passed 00:08:16.673 00:08:16.673 Run Summary: Type Total Ran Passed Failed Inactive 00:08:16.673 suites 1 1 n/a 0 0 00:08:16.673 tests 1 1 1 0 0 00:08:16.673 asserts 25 25 25 0 n/a 00:08:16.673 00:08:16.673 Elapsed time = 0.035 seconds 00:08:16.673 00:08:16.673 real 0m0.058s 00:08:16.673 user 0m0.018s 00:08:16.673 sys 0m0.039s 00:08:16.673 08:42:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:16.673 08:42:33 -- common/autotest_common.sh@10 -- # set +x 00:08:16.673 ************************************ 00:08:16.673 END TEST env_pci 00:08:16.673 ************************************ 00:08:16.673 08:42:33 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:16.673 08:42:33 -- env/env.sh@15 -- # uname 00:08:16.673 08:42:33 -- env/env.sh@15 -- # '[' Linux = 
Linux ']' 00:08:16.673 08:42:33 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:16.673 08:42:33 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:16.673 08:42:33 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:16.673 08:42:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.673 08:42:33 -- common/autotest_common.sh@10 -- # set +x 00:08:16.673 ************************************ 00:08:16.673 START TEST env_dpdk_post_init 00:08:16.673 ************************************ 00:08:16.673 08:42:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:16.931 EAL: Detected CPU lcores: 112 00:08:16.931 EAL: Detected NUMA nodes: 2 00:08:16.931 EAL: Detected shared linkage of DPDK 00:08:16.931 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:16.931 EAL: Selected IOVA mode 'VA' 00:08:16.931 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.931 EAL: VFIO support initialized 00:08:16.931 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:16.931 EAL: Using IOMMU type 1 (Type 1) 00:08:16.931 EAL: Ignore mapping IO port bar(1) 00:08:16.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:08:16.931 EAL: Ignore mapping IO port bar(1) 00:08:16.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:08:16.931 EAL: Ignore mapping IO port bar(1) 00:08:16.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:08:16.931 EAL: Ignore mapping IO port bar(1) 00:08:16.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:08:16.931 EAL: Ignore mapping IO port bar(1) 00:08:16.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:08:16.931 EAL: Ignore mapping IO port bar(1) 00:08:16.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:08:16.931 EAL: Ignore mapping IO port bar(1) 00:08:16.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:08:16.931 EAL: Ignore mapping IO port bar(1) 00:08:16.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:08:16.931 EAL: Ignore mapping IO port bar(1) 00:08:16.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:08:16.931 EAL: Ignore mapping IO port bar(1) 00:08:16.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:08:16.931 EAL: Ignore mapping IO port bar(1) 00:08:16.931 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:08:17.189 EAL: Ignore mapping IO port bar(1) 00:08:17.189 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:08:17.189 EAL: Ignore mapping IO port bar(1) 00:08:17.189 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:08:17.189 EAL: Ignore mapping IO port bar(1) 00:08:17.189 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:08:17.189 EAL: Ignore mapping IO port bar(1) 00:08:17.189 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:08:17.189 EAL: Ignore mapping IO port bar(1) 00:08:17.189 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:08:17.778 EAL: Probe 
PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:08:21.958 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:08:21.958 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:08:21.958 Starting DPDK initialization... 00:08:21.958 Starting SPDK post initialization... 00:08:21.958 SPDK NVMe probe 00:08:21.958 Attaching to 0000:d8:00.0 00:08:21.958 Attached to 0000:d8:00.0 00:08:21.958 Cleaning up... 00:08:21.958 00:08:21.958 real 0m4.871s 00:08:21.958 user 0m3.553s 00:08:21.958 sys 0m0.377s 00:08:21.958 08:42:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:21.958 08:42:38 -- common/autotest_common.sh@10 -- # set +x 00:08:21.958 ************************************ 00:08:21.958 END TEST env_dpdk_post_init 00:08:21.958 ************************************ 00:08:21.958 08:42:38 -- env/env.sh@26 -- # uname 00:08:21.958 08:42:38 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:21.958 08:42:38 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:21.958 08:42:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:21.958 08:42:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.958 08:42:38 -- common/autotest_common.sh@10 -- # set +x 00:08:21.958 ************************************ 00:08:21.958 START TEST env_mem_callbacks 00:08:21.958 ************************************ 00:08:21.958 08:42:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:21.958 EAL: Detected CPU lcores: 112 00:08:21.958 EAL: Detected NUMA nodes: 2 00:08:21.958 EAL: Detected shared linkage of DPDK 00:08:21.958 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:21.958 EAL: Selected IOVA mode 'VA' 00:08:21.958 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.958 EAL: VFIO support initialized 00:08:21.958 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:21.958 00:08:21.958 00:08:21.958 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.958 http://cunit.sourceforge.net/ 00:08:21.958 00:08:21.958 00:08:21.958 Suite: memory 00:08:21.958 Test: test ... 
00:08:21.958 register 0x200000200000 2097152 00:08:21.958 malloc 3145728 00:08:21.958 register 0x200000400000 4194304 00:08:21.958 buf 0x200000500000 len 3145728 PASSED 00:08:21.958 malloc 64 00:08:21.958 buf 0x2000004fff40 len 64 PASSED 00:08:21.958 malloc 4194304 00:08:21.958 register 0x200000800000 6291456 00:08:21.958 buf 0x200000a00000 len 4194304 PASSED 00:08:21.958 free 0x200000500000 3145728 00:08:21.958 free 0x2000004fff40 64 00:08:21.958 unregister 0x200000400000 4194304 PASSED 00:08:21.958 free 0x200000a00000 4194304 00:08:21.958 unregister 0x200000800000 6291456 PASSED 00:08:21.958 malloc 8388608 00:08:21.958 register 0x200000400000 10485760 00:08:21.958 buf 0x200000600000 len 8388608 PASSED 00:08:21.958 free 0x200000600000 8388608 00:08:21.958 unregister 0x200000400000 10485760 PASSED 00:08:21.958 passed 00:08:21.958 00:08:21.958 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.958 suites 1 1 n/a 0 0 00:08:21.958 tests 1 1 1 0 0 00:08:21.958 asserts 15 15 15 0 n/a 00:08:21.958 00:08:21.958 Elapsed time = 0.005 seconds 00:08:21.958 00:08:21.958 real 0m0.066s 00:08:21.958 user 0m0.018s 00:08:21.958 sys 0m0.047s 00:08:21.958 08:42:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:21.958 08:42:39 -- common/autotest_common.sh@10 -- # set +x 00:08:21.958 ************************************ 00:08:21.958 END TEST env_mem_callbacks 00:08:21.958 ************************************ 00:08:21.958 00:08:21.958 real 0m7.260s 00:08:21.958 user 0m4.710s 00:08:21.958 sys 0m1.513s 00:08:21.958 08:42:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:21.958 08:42:39 -- common/autotest_common.sh@10 -- # set +x 00:08:21.958 ************************************ 00:08:21.958 END TEST env 00:08:21.958 ************************************ 00:08:21.958 08:42:39 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:21.958 08:42:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:21.958 08:42:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:21.958 08:42:39 -- common/autotest_common.sh@10 -- # set +x 00:08:22.240 ************************************ 00:08:22.240 START TEST rpc 00:08:22.240 ************************************ 00:08:22.240 08:42:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:22.240 * Looking for test storage... 00:08:22.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:22.240 08:42:39 -- rpc/rpc.sh@65 -- # spdk_pid=1906960 00:08:22.240 08:42:39 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:22.240 08:42:39 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:22.240 08:42:39 -- rpc/rpc.sh@67 -- # waitforlisten 1906960 00:08:22.240 08:42:39 -- common/autotest_common.sh@817 -- # '[' -z 1906960 ']' 00:08:22.240 08:42:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.240 08:42:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:22.240 08:42:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
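[editor's note] Before the rpc suite below starts issuing commands, the harness launches a fresh target with the bdev tracepoint group enabled and blocks until the RPC socket answers. A minimal standalone sketch of that fixture, assuming it is run from the SPDK repo root (relative paths are an assumption; the socket path is the one shown in the log):

    # Start the target the way rpc.sh does; -e bdev enables the bdev tracepoints.
    ./build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    # Poll the default RPC socket until the target responds, as waitforlisten does.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "target $spdk_pid is listening"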
00:08:22.240 08:42:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:22.240 08:42:39 -- common/autotest_common.sh@10 -- # set +x 00:08:22.240 [2024-04-26 08:42:39.404770] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:08:22.240 [2024-04-26 08:42:39.404829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906960 ] 00:08:22.240 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.240 [2024-04-26 08:42:39.475473] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.497 [2024-04-26 08:42:39.551142] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:22.497 [2024-04-26 08:42:39.551178] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1906960' to capture a snapshot of events at runtime. 00:08:22.497 [2024-04-26 08:42:39.551189] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.497 [2024-04-26 08:42:39.551198] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.497 [2024-04-26 08:42:39.551205] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1906960 for offline analysis/debug. 00:08:22.497 [2024-04-26 08:42:39.551232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.063 08:42:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:23.063 08:42:40 -- common/autotest_common.sh@850 -- # return 0 00:08:23.063 08:42:40 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:23.063 08:42:40 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:23.063 08:42:40 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:23.063 08:42:40 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:23.063 08:42:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:23.063 08:42:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.063 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.321 ************************************ 00:08:23.321 START TEST rpc_integrity 00:08:23.321 ************************************ 00:08:23.321 08:42:40 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:08:23.321 08:42:40 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:23.321 08:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.321 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.321 08:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.321 08:42:40 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:23.321 08:42:40 -- rpc/rpc.sh@13 -- # jq length 00:08:23.321 08:42:40 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:23.321 08:42:40 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:23.321 08:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 
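[editor's note] rpc_integrity, whose trace begins above, drives a create/inspect/delete cycle over JSON-RPC; rpc_cmd is a thin wrapper around scripts/rpc.py, so the same cycle can be replayed by hand against a running target. Method names and bdev names below are exactly those in the log; the jq checks are a sketch of what the test asserts:

    ./scripts/rpc.py bdev_get_bdevs | jq length                   # starts at 0
    ./scripts/rpc.py bdev_malloc_create 8 512                     # prints: Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length                   # now 2
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length                   # back to 0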
00:08:23.321 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.321 08:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.321 08:42:40 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:23.321 08:42:40 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:23.321 08:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.321 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.321 08:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.321 08:42:40 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:23.321 { 00:08:23.321 "name": "Malloc0", 00:08:23.321 "aliases": [ 00:08:23.321 "2a3c8122-cd7a-47ab-970c-9934b71cc229" 00:08:23.321 ], 00:08:23.321 "product_name": "Malloc disk", 00:08:23.321 "block_size": 512, 00:08:23.321 "num_blocks": 16384, 00:08:23.321 "uuid": "2a3c8122-cd7a-47ab-970c-9934b71cc229", 00:08:23.321 "assigned_rate_limits": { 00:08:23.321 "rw_ios_per_sec": 0, 00:08:23.321 "rw_mbytes_per_sec": 0, 00:08:23.321 "r_mbytes_per_sec": 0, 00:08:23.321 "w_mbytes_per_sec": 0 00:08:23.321 }, 00:08:23.321 "claimed": false, 00:08:23.321 "zoned": false, 00:08:23.321 "supported_io_types": { 00:08:23.321 "read": true, 00:08:23.321 "write": true, 00:08:23.321 "unmap": true, 00:08:23.321 "write_zeroes": true, 00:08:23.321 "flush": true, 00:08:23.321 "reset": true, 00:08:23.321 "compare": false, 00:08:23.321 "compare_and_write": false, 00:08:23.321 "abort": true, 00:08:23.321 "nvme_admin": false, 00:08:23.321 "nvme_io": false 00:08:23.321 }, 00:08:23.321 "memory_domains": [ 00:08:23.321 { 00:08:23.321 "dma_device_id": "system", 00:08:23.321 "dma_device_type": 1 00:08:23.321 }, 00:08:23.321 { 00:08:23.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.321 "dma_device_type": 2 00:08:23.321 } 00:08:23.321 ], 00:08:23.321 "driver_specific": {} 00:08:23.321 } 00:08:23.321 ]' 00:08:23.321 08:42:40 -- rpc/rpc.sh@17 -- # jq length 00:08:23.321 08:42:40 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:23.321 08:42:40 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:23.321 08:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.321 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.321 [2024-04-26 08:42:40.483747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:23.321 [2024-04-26 08:42:40.483775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.321 [2024-04-26 08:42:40.483789] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a3f360 00:08:23.321 [2024-04-26 08:42:40.483797] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.321 [2024-04-26 08:42:40.484866] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.321 [2024-04-26 08:42:40.484888] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:23.321 Passthru0 00:08:23.321 08:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.321 08:42:40 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:23.321 08:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.321 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.321 08:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.321 08:42:40 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:23.321 { 00:08:23.321 "name": "Malloc0", 00:08:23.321 "aliases": [ 00:08:23.321 "2a3c8122-cd7a-47ab-970c-9934b71cc229" 00:08:23.321 ], 00:08:23.321 "product_name": "Malloc disk", 00:08:23.321 "block_size": 512, 
00:08:23.321 "num_blocks": 16384, 00:08:23.321 "uuid": "2a3c8122-cd7a-47ab-970c-9934b71cc229", 00:08:23.321 "assigned_rate_limits": { 00:08:23.321 "rw_ios_per_sec": 0, 00:08:23.321 "rw_mbytes_per_sec": 0, 00:08:23.321 "r_mbytes_per_sec": 0, 00:08:23.321 "w_mbytes_per_sec": 0 00:08:23.321 }, 00:08:23.321 "claimed": true, 00:08:23.321 "claim_type": "exclusive_write", 00:08:23.321 "zoned": false, 00:08:23.321 "supported_io_types": { 00:08:23.321 "read": true, 00:08:23.321 "write": true, 00:08:23.321 "unmap": true, 00:08:23.321 "write_zeroes": true, 00:08:23.321 "flush": true, 00:08:23.321 "reset": true, 00:08:23.321 "compare": false, 00:08:23.321 "compare_and_write": false, 00:08:23.321 "abort": true, 00:08:23.321 "nvme_admin": false, 00:08:23.321 "nvme_io": false 00:08:23.321 }, 00:08:23.321 "memory_domains": [ 00:08:23.321 { 00:08:23.321 "dma_device_id": "system", 00:08:23.321 "dma_device_type": 1 00:08:23.321 }, 00:08:23.321 { 00:08:23.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.321 "dma_device_type": 2 00:08:23.322 } 00:08:23.322 ], 00:08:23.322 "driver_specific": {} 00:08:23.322 }, 00:08:23.322 { 00:08:23.322 "name": "Passthru0", 00:08:23.322 "aliases": [ 00:08:23.322 "12349a3a-bc2a-5df9-a39a-431672de2c73" 00:08:23.322 ], 00:08:23.322 "product_name": "passthru", 00:08:23.322 "block_size": 512, 00:08:23.322 "num_blocks": 16384, 00:08:23.322 "uuid": "12349a3a-bc2a-5df9-a39a-431672de2c73", 00:08:23.322 "assigned_rate_limits": { 00:08:23.322 "rw_ios_per_sec": 0, 00:08:23.322 "rw_mbytes_per_sec": 0, 00:08:23.322 "r_mbytes_per_sec": 0, 00:08:23.322 "w_mbytes_per_sec": 0 00:08:23.322 }, 00:08:23.322 "claimed": false, 00:08:23.322 "zoned": false, 00:08:23.322 "supported_io_types": { 00:08:23.322 "read": true, 00:08:23.322 "write": true, 00:08:23.322 "unmap": true, 00:08:23.322 "write_zeroes": true, 00:08:23.322 "flush": true, 00:08:23.322 "reset": true, 00:08:23.322 "compare": false, 00:08:23.322 "compare_and_write": false, 00:08:23.322 "abort": true, 00:08:23.322 "nvme_admin": false, 00:08:23.322 "nvme_io": false 00:08:23.322 }, 00:08:23.322 "memory_domains": [ 00:08:23.322 { 00:08:23.322 "dma_device_id": "system", 00:08:23.322 "dma_device_type": 1 00:08:23.322 }, 00:08:23.322 { 00:08:23.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.322 "dma_device_type": 2 00:08:23.322 } 00:08:23.322 ], 00:08:23.322 "driver_specific": { 00:08:23.322 "passthru": { 00:08:23.322 "name": "Passthru0", 00:08:23.322 "base_bdev_name": "Malloc0" 00:08:23.322 } 00:08:23.322 } 00:08:23.322 } 00:08:23.322 ]' 00:08:23.322 08:42:40 -- rpc/rpc.sh@21 -- # jq length 00:08:23.322 08:42:40 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:23.322 08:42:40 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:23.322 08:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.322 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.322 08:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.322 08:42:40 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:23.322 08:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.322 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.322 08:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.322 08:42:40 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:23.322 08:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.322 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.322 08:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.322 08:42:40 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:08:23.322 08:42:40 -- rpc/rpc.sh@26 -- # jq length 00:08:23.579 08:42:40 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:23.579 00:08:23.579 real 0m0.259s 00:08:23.579 user 0m0.153s 00:08:23.579 sys 0m0.042s 00:08:23.579 08:42:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:23.579 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.579 ************************************ 00:08:23.579 END TEST rpc_integrity 00:08:23.579 ************************************ 00:08:23.579 08:42:40 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:23.579 08:42:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:23.579 08:42:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.579 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.579 ************************************ 00:08:23.579 START TEST rpc_plugins 00:08:23.579 ************************************ 00:08:23.579 08:42:40 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:08:23.579 08:42:40 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:23.579 08:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.579 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.579 08:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.837 08:42:40 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:23.837 08:42:40 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:23.837 08:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.837 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.837 08:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.837 08:42:40 -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:23.837 { 00:08:23.837 "name": "Malloc1", 00:08:23.837 "aliases": [ 00:08:23.837 "3cc864bd-d82e-4eac-ba74-a43407598235" 00:08:23.837 ], 00:08:23.837 "product_name": "Malloc disk", 00:08:23.837 "block_size": 4096, 00:08:23.837 "num_blocks": 256, 00:08:23.837 "uuid": "3cc864bd-d82e-4eac-ba74-a43407598235", 00:08:23.837 "assigned_rate_limits": { 00:08:23.837 "rw_ios_per_sec": 0, 00:08:23.837 "rw_mbytes_per_sec": 0, 00:08:23.837 "r_mbytes_per_sec": 0, 00:08:23.837 "w_mbytes_per_sec": 0 00:08:23.837 }, 00:08:23.837 "claimed": false, 00:08:23.837 "zoned": false, 00:08:23.837 "supported_io_types": { 00:08:23.837 "read": true, 00:08:23.837 "write": true, 00:08:23.837 "unmap": true, 00:08:23.837 "write_zeroes": true, 00:08:23.837 "flush": true, 00:08:23.837 "reset": true, 00:08:23.837 "compare": false, 00:08:23.837 "compare_and_write": false, 00:08:23.837 "abort": true, 00:08:23.837 "nvme_admin": false, 00:08:23.837 "nvme_io": false 00:08:23.837 }, 00:08:23.837 "memory_domains": [ 00:08:23.837 { 00:08:23.837 "dma_device_id": "system", 00:08:23.837 "dma_device_type": 1 00:08:23.837 }, 00:08:23.837 { 00:08:23.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.837 "dma_device_type": 2 00:08:23.837 } 00:08:23.837 ], 00:08:23.837 "driver_specific": {} 00:08:23.837 } 00:08:23.837 ]' 00:08:23.837 08:42:40 -- rpc/rpc.sh@32 -- # jq length 00:08:23.837 08:42:40 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:23.837 08:42:40 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:23.837 08:42:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:23.837 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.837 08:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.837 08:42:40 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:23.837 08:42:40 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:08:23.837 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.837 08:42:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:23.837 08:42:40 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:23.837 08:42:40 -- rpc/rpc.sh@36 -- # jq length 00:08:23.837 08:42:40 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:23.837 00:08:23.837 real 0m0.146s 00:08:23.837 user 0m0.088s 00:08:23.837 sys 0m0.025s 00:08:23.837 08:42:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:23.837 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:23.837 ************************************ 00:08:23.837 END TEST rpc_plugins 00:08:23.837 ************************************ 00:08:23.837 08:42:40 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:23.837 08:42:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:23.837 08:42:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.837 08:42:40 -- common/autotest_common.sh@10 -- # set +x 00:08:24.094 ************************************ 00:08:24.094 START TEST rpc_trace_cmd_test 00:08:24.094 ************************************ 00:08:24.094 08:42:41 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:08:24.094 08:42:41 -- rpc/rpc.sh@40 -- # local info 00:08:24.094 08:42:41 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:24.094 08:42:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.094 08:42:41 -- common/autotest_common.sh@10 -- # set +x 00:08:24.094 08:42:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.094 08:42:41 -- rpc/rpc.sh@42 -- # info='{ 00:08:24.094 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1906960", 00:08:24.094 "tpoint_group_mask": "0x8", 00:08:24.094 "iscsi_conn": { 00:08:24.094 "mask": "0x2", 00:08:24.094 "tpoint_mask": "0x0" 00:08:24.094 }, 00:08:24.094 "scsi": { 00:08:24.094 "mask": "0x4", 00:08:24.094 "tpoint_mask": "0x0" 00:08:24.094 }, 00:08:24.094 "bdev": { 00:08:24.094 "mask": "0x8", 00:08:24.094 "tpoint_mask": "0xffffffffffffffff" 00:08:24.094 }, 00:08:24.094 "nvmf_rdma": { 00:08:24.094 "mask": "0x10", 00:08:24.094 "tpoint_mask": "0x0" 00:08:24.094 }, 00:08:24.094 "nvmf_tcp": { 00:08:24.094 "mask": "0x20", 00:08:24.094 "tpoint_mask": "0x0" 00:08:24.094 }, 00:08:24.094 "ftl": { 00:08:24.094 "mask": "0x40", 00:08:24.094 "tpoint_mask": "0x0" 00:08:24.094 }, 00:08:24.094 "blobfs": { 00:08:24.094 "mask": "0x80", 00:08:24.094 "tpoint_mask": "0x0" 00:08:24.094 }, 00:08:24.094 "dsa": { 00:08:24.094 "mask": "0x200", 00:08:24.094 "tpoint_mask": "0x0" 00:08:24.094 }, 00:08:24.094 "thread": { 00:08:24.094 "mask": "0x400", 00:08:24.094 "tpoint_mask": "0x0" 00:08:24.094 }, 00:08:24.094 "nvme_pcie": { 00:08:24.094 "mask": "0x800", 00:08:24.094 "tpoint_mask": "0x0" 00:08:24.094 }, 00:08:24.094 "iaa": { 00:08:24.094 "mask": "0x1000", 00:08:24.094 "tpoint_mask": "0x0" 00:08:24.094 }, 00:08:24.094 "nvme_tcp": { 00:08:24.094 "mask": "0x2000", 00:08:24.094 "tpoint_mask": "0x0" 00:08:24.094 }, 00:08:24.094 "bdev_nvme": { 00:08:24.094 "mask": "0x4000", 00:08:24.094 "tpoint_mask": "0x0" 00:08:24.094 }, 00:08:24.094 "sock": { 00:08:24.094 "mask": "0x8000", 00:08:24.094 "tpoint_mask": "0x0" 00:08:24.094 } 00:08:24.094 }' 00:08:24.094 08:42:41 -- rpc/rpc.sh@43 -- # jq length 00:08:24.094 08:42:41 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:08:24.095 08:42:41 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:24.095 08:42:41 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:24.095 08:42:41 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
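[editor's note] The jq probes around this point verify that trace_get_info reports the 0x8 (bdev) group mask requested at startup plus a live shared-memory trace file. Both can be inspected manually; the spdk_trace invocation is the one the target itself suggested earlier in this log:

    ./scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask    # "0x8" = bdev group
    ls -l /dev/shm/spdk_tgt_trace.pid1906960                      # trace buffer backing file
    spdk_trace -s spdk_tgt -p 1906960                             # decode captured events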
00:08:24.095 08:42:41 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:24.095 08:42:41 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:24.095 08:42:41 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:24.095 08:42:41 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:24.095 08:42:41 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:24.095 00:08:24.095 real 0m0.177s 00:08:24.095 user 0m0.141s 00:08:24.095 sys 0m0.029s 00:08:24.095 08:42:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:24.095 08:42:41 -- common/autotest_common.sh@10 -- # set +x 00:08:24.095 ************************************ 00:08:24.095 END TEST rpc_trace_cmd_test 00:08:24.095 ************************************ 00:08:24.352 08:42:41 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:24.352 08:42:41 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:24.352 08:42:41 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:24.352 08:42:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:24.352 08:42:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.352 08:42:41 -- common/autotest_common.sh@10 -- # set +x 00:08:24.352 ************************************ 00:08:24.352 START TEST rpc_daemon_integrity 00:08:24.352 ************************************ 00:08:24.352 08:42:41 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:08:24.352 08:42:41 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:24.352 08:42:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.352 08:42:41 -- common/autotest_common.sh@10 -- # set +x 00:08:24.352 08:42:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.352 08:42:41 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:24.352 08:42:41 -- rpc/rpc.sh@13 -- # jq length 00:08:24.352 08:42:41 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:24.352 08:42:41 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:24.352 08:42:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.352 08:42:41 -- common/autotest_common.sh@10 -- # set +x 00:08:24.611 08:42:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.611 08:42:41 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:24.611 08:42:41 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:24.611 08:42:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.611 08:42:41 -- common/autotest_common.sh@10 -- # set +x 00:08:24.611 08:42:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.611 08:42:41 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:24.611 { 00:08:24.611 "name": "Malloc2", 00:08:24.611 "aliases": [ 00:08:24.611 "ca75c224-025a-4969-a41d-b0e7332ec7f9" 00:08:24.611 ], 00:08:24.611 "product_name": "Malloc disk", 00:08:24.611 "block_size": 512, 00:08:24.611 "num_blocks": 16384, 00:08:24.611 "uuid": "ca75c224-025a-4969-a41d-b0e7332ec7f9", 00:08:24.611 "assigned_rate_limits": { 00:08:24.611 "rw_ios_per_sec": 0, 00:08:24.611 "rw_mbytes_per_sec": 0, 00:08:24.611 "r_mbytes_per_sec": 0, 00:08:24.611 "w_mbytes_per_sec": 0 00:08:24.611 }, 00:08:24.611 "claimed": false, 00:08:24.611 "zoned": false, 00:08:24.611 "supported_io_types": { 00:08:24.611 "read": true, 00:08:24.611 "write": true, 00:08:24.611 "unmap": true, 00:08:24.611 "write_zeroes": true, 00:08:24.611 "flush": true, 00:08:24.611 "reset": true, 00:08:24.611 "compare": false, 00:08:24.611 "compare_and_write": false, 00:08:24.611 "abort": true, 00:08:24.611 "nvme_admin": false, 00:08:24.611 "nvme_io": false 00:08:24.611 }, 00:08:24.611 "memory_domains": [ 00:08:24.611 { 00:08:24.611 "dma_device_id": "system", 00:08:24.611 
"dma_device_type": 1 00:08:24.611 }, 00:08:24.611 { 00:08:24.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.611 "dma_device_type": 2 00:08:24.611 } 00:08:24.612 ], 00:08:24.612 "driver_specific": {} 00:08:24.612 } 00:08:24.612 ]' 00:08:24.612 08:42:41 -- rpc/rpc.sh@17 -- # jq length 00:08:24.612 08:42:41 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:24.612 08:42:41 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:24.612 08:42:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.612 08:42:41 -- common/autotest_common.sh@10 -- # set +x 00:08:24.612 [2024-04-26 08:42:41.674978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:24.612 [2024-04-26 08:42:41.675005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.612 [2024-04-26 08:42:41.675020] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a3f0e0 00:08:24.612 [2024-04-26 08:42:41.675029] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.612 [2024-04-26 08:42:41.675943] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.612 [2024-04-26 08:42:41.675965] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:24.612 Passthru0 00:08:24.612 08:42:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.612 08:42:41 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:24.612 08:42:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.612 08:42:41 -- common/autotest_common.sh@10 -- # set +x 00:08:24.612 08:42:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.612 08:42:41 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:24.612 { 00:08:24.612 "name": "Malloc2", 00:08:24.612 "aliases": [ 00:08:24.612 "ca75c224-025a-4969-a41d-b0e7332ec7f9" 00:08:24.612 ], 00:08:24.612 "product_name": "Malloc disk", 00:08:24.612 "block_size": 512, 00:08:24.612 "num_blocks": 16384, 00:08:24.612 "uuid": "ca75c224-025a-4969-a41d-b0e7332ec7f9", 00:08:24.612 "assigned_rate_limits": { 00:08:24.612 "rw_ios_per_sec": 0, 00:08:24.612 "rw_mbytes_per_sec": 0, 00:08:24.612 "r_mbytes_per_sec": 0, 00:08:24.612 "w_mbytes_per_sec": 0 00:08:24.612 }, 00:08:24.612 "claimed": true, 00:08:24.612 "claim_type": "exclusive_write", 00:08:24.612 "zoned": false, 00:08:24.612 "supported_io_types": { 00:08:24.612 "read": true, 00:08:24.612 "write": true, 00:08:24.612 "unmap": true, 00:08:24.612 "write_zeroes": true, 00:08:24.612 "flush": true, 00:08:24.612 "reset": true, 00:08:24.612 "compare": false, 00:08:24.612 "compare_and_write": false, 00:08:24.612 "abort": true, 00:08:24.612 "nvme_admin": false, 00:08:24.612 "nvme_io": false 00:08:24.612 }, 00:08:24.612 "memory_domains": [ 00:08:24.612 { 00:08:24.612 "dma_device_id": "system", 00:08:24.612 "dma_device_type": 1 00:08:24.612 }, 00:08:24.612 { 00:08:24.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.612 "dma_device_type": 2 00:08:24.612 } 00:08:24.612 ], 00:08:24.612 "driver_specific": {} 00:08:24.612 }, 00:08:24.612 { 00:08:24.612 "name": "Passthru0", 00:08:24.612 "aliases": [ 00:08:24.612 "30ef2de9-6c13-55f1-b1a1-9835f1658686" 00:08:24.612 ], 00:08:24.612 "product_name": "passthru", 00:08:24.612 "block_size": 512, 00:08:24.612 "num_blocks": 16384, 00:08:24.612 "uuid": "30ef2de9-6c13-55f1-b1a1-9835f1658686", 00:08:24.612 "assigned_rate_limits": { 00:08:24.612 "rw_ios_per_sec": 0, 00:08:24.612 "rw_mbytes_per_sec": 0, 00:08:24.612 "r_mbytes_per_sec": 0, 00:08:24.612 
"w_mbytes_per_sec": 0 00:08:24.612 }, 00:08:24.612 "claimed": false, 00:08:24.612 "zoned": false, 00:08:24.612 "supported_io_types": { 00:08:24.612 "read": true, 00:08:24.612 "write": true, 00:08:24.612 "unmap": true, 00:08:24.612 "write_zeroes": true, 00:08:24.612 "flush": true, 00:08:24.612 "reset": true, 00:08:24.612 "compare": false, 00:08:24.612 "compare_and_write": false, 00:08:24.612 "abort": true, 00:08:24.612 "nvme_admin": false, 00:08:24.612 "nvme_io": false 00:08:24.612 }, 00:08:24.612 "memory_domains": [ 00:08:24.612 { 00:08:24.612 "dma_device_id": "system", 00:08:24.612 "dma_device_type": 1 00:08:24.612 }, 00:08:24.612 { 00:08:24.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.612 "dma_device_type": 2 00:08:24.612 } 00:08:24.612 ], 00:08:24.612 "driver_specific": { 00:08:24.612 "passthru": { 00:08:24.612 "name": "Passthru0", 00:08:24.612 "base_bdev_name": "Malloc2" 00:08:24.612 } 00:08:24.612 } 00:08:24.612 } 00:08:24.612 ]' 00:08:24.612 08:42:41 -- rpc/rpc.sh@21 -- # jq length 00:08:24.612 08:42:41 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:24.612 08:42:41 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:24.612 08:42:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.612 08:42:41 -- common/autotest_common.sh@10 -- # set +x 00:08:24.612 08:42:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.612 08:42:41 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:24.612 08:42:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.612 08:42:41 -- common/autotest_common.sh@10 -- # set +x 00:08:24.612 08:42:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.612 08:42:41 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:24.612 08:42:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.612 08:42:41 -- common/autotest_common.sh@10 -- # set +x 00:08:24.612 08:42:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.612 08:42:41 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:24.612 08:42:41 -- rpc/rpc.sh@26 -- # jq length 00:08:24.612 08:42:41 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:24.612 00:08:24.612 real 0m0.282s 00:08:24.612 user 0m0.164s 00:08:24.612 sys 0m0.055s 00:08:24.612 08:42:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:24.612 08:42:41 -- common/autotest_common.sh@10 -- # set +x 00:08:24.612 ************************************ 00:08:24.612 END TEST rpc_daemon_integrity 00:08:24.612 ************************************ 00:08:24.871 08:42:41 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:24.871 08:42:41 -- rpc/rpc.sh@84 -- # killprocess 1906960 00:08:24.871 08:42:41 -- common/autotest_common.sh@936 -- # '[' -z 1906960 ']' 00:08:24.871 08:42:41 -- common/autotest_common.sh@940 -- # kill -0 1906960 00:08:24.871 08:42:41 -- common/autotest_common.sh@941 -- # uname 00:08:24.871 08:42:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:24.871 08:42:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1906960 00:08:24.871 08:42:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:24.871 08:42:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:24.871 08:42:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1906960' 00:08:24.871 killing process with pid 1906960 00:08:24.871 08:42:41 -- common/autotest_common.sh@955 -- # kill 1906960 00:08:24.871 08:42:41 -- common/autotest_common.sh@960 -- # wait 1906960 00:08:25.128 00:08:25.128 real 0m3.004s 00:08:25.128 user 0m3.796s 
00:08:25.128 sys 0m1.010s 00:08:25.128 08:42:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:25.128 08:42:42 -- common/autotest_common.sh@10 -- # set +x 00:08:25.128 ************************************ 00:08:25.128 END TEST rpc 00:08:25.128 ************************************ 00:08:25.128 08:42:42 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:25.128 08:42:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:25.128 08:42:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.128 08:42:42 -- common/autotest_common.sh@10 -- # set +x 00:08:25.386 ************************************ 00:08:25.386 START TEST skip_rpc 00:08:25.386 ************************************ 00:08:25.386 08:42:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:25.386 * Looking for test storage... 00:08:25.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:25.386 08:42:42 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:25.386 08:42:42 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:25.386 08:42:42 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:25.386 08:42:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:25.386 08:42:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.386 08:42:42 -- common/autotest_common.sh@10 -- # set +x 00:08:25.644 ************************************ 00:08:25.644 START TEST skip_rpc 00:08:25.644 ************************************ 00:08:25.644 08:42:42 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:08:25.644 08:42:42 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1907800 00:08:25.644 08:42:42 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:25.644 08:42:42 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:25.644 08:42:42 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:25.644 [2024-04-26 08:42:42.753690] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
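[editor's note] The target being started here runs with --no-rpc-server, so skip_rpc's subsequent spdk_get_version call is expected to be refused; that refusal is the whole assertion. A hand-run sketch (flags as in the EAL parameter line below; relative paths are an assumption):

    # No RPC listener exists, so any rpc.py call must fail.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    ./scripts/rpc.py spdk_get_version \
        && echo "unexpected: RPC answered" \
        || echo "RPC refused, as the test expects"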
00:08:25.644 [2024-04-26 08:42:42.753728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1907800 ] 00:08:25.644 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.644 [2024-04-26 08:42:42.821342] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.644 [2024-04-26 08:42:42.889481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.908 08:42:47 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:30.908 08:42:47 -- common/autotest_common.sh@638 -- # local es=0 00:08:30.908 08:42:47 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:30.908 08:42:47 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:08:30.908 08:42:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:30.908 08:42:47 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:08:30.908 08:42:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:30.908 08:42:47 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:08:30.908 08:42:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.908 08:42:47 -- common/autotest_common.sh@10 -- # set +x 00:08:30.908 08:42:47 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:08:30.908 08:42:47 -- common/autotest_common.sh@641 -- # es=1 00:08:30.908 08:42:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:30.908 08:42:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:30.908 08:42:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:30.908 08:42:47 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:30.908 08:42:47 -- rpc/skip_rpc.sh@23 -- # killprocess 1907800 00:08:30.908 08:42:47 -- common/autotest_common.sh@936 -- # '[' -z 1907800 ']' 00:08:30.908 08:42:47 -- common/autotest_common.sh@940 -- # kill -0 1907800 00:08:30.908 08:42:47 -- common/autotest_common.sh@941 -- # uname 00:08:30.908 08:42:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:30.908 08:42:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1907800 00:08:30.908 08:42:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:30.908 08:42:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:30.908 08:42:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1907800' 00:08:30.908 killing process with pid 1907800 00:08:30.908 08:42:47 -- common/autotest_common.sh@955 -- # kill 1907800 00:08:30.908 08:42:47 -- common/autotest_common.sh@960 -- # wait 1907800 00:08:30.908 00:08:30.908 real 0m5.399s 00:08:30.908 user 0m5.160s 00:08:30.908 sys 0m0.283s 00:08:30.908 08:42:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:30.908 08:42:48 -- common/autotest_common.sh@10 -- # set +x 00:08:30.908 ************************************ 00:08:30.908 END TEST skip_rpc 00:08:30.908 ************************************ 00:08:30.908 08:42:48 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:30.908 08:42:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:30.908 08:42:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.908 08:42:48 -- common/autotest_common.sh@10 -- # set +x 00:08:31.166 ************************************ 00:08:31.166 START TEST skip_rpc_with_json 00:08:31.166 ************************************ 
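[editor's note] skip_rpc_with_json, starting here, builds target state over RPC (an nvmf TCP transport), snapshots it with save_config, and later relaunches the target from that JSON with no RPC replay at all. The round-trip in isolation, against an already-listening target (the output path /tmp/config.json is hypothetical):

    ./scripts/rpc.py nvmf_create_transport -t tcp        # state worth saving
    ./scripts/rpc.py save_config > /tmp/config.json      # snapshot of every subsystem
    # Restore at init time; no RPC server is needed to rebuild the state.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json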
00:08:31.166 08:42:48 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:08:31.166 08:42:48 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:31.166 08:42:48 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1908724 00:08:31.166 08:42:48 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:31.166 08:42:48 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:31.166 08:42:48 -- rpc/skip_rpc.sh@31 -- # waitforlisten 1908724 00:08:31.166 08:42:48 -- common/autotest_common.sh@817 -- # '[' -z 1908724 ']' 00:08:31.166 08:42:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.166 08:42:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:31.166 08:42:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.166 08:42:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:31.166 08:42:48 -- common/autotest_common.sh@10 -- # set +x 00:08:31.166 [2024-04-26 08:42:48.360734] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:08:31.166 [2024-04-26 08:42:48.360779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908724 ] 00:08:31.166 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.425 [2024-04-26 08:42:48.430882] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.425 [2024-04-26 08:42:48.503633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.990 08:42:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:31.990 08:42:49 -- common/autotest_common.sh@850 -- # return 0 00:08:31.990 08:42:49 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:31.990 08:42:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.990 08:42:49 -- common/autotest_common.sh@10 -- # set +x 00:08:31.990 [2024-04-26 08:42:49.155841] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:31.990 request: 00:08:31.990 { 00:08:31.990 "trtype": "tcp", 00:08:31.990 "method": "nvmf_get_transports", 00:08:31.990 "req_id": 1 00:08:31.990 } 00:08:31.990 Got JSON-RPC error response 00:08:31.990 response: 00:08:31.990 { 00:08:31.990 "code": -19, 00:08:31.990 "message": "No such device" 00:08:31.990 } 00:08:31.990 08:42:49 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:08:31.990 08:42:49 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:31.990 08:42:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.990 08:42:49 -- common/autotest_common.sh@10 -- # set +x 00:08:31.990 [2024-04-26 08:42:49.167940] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.990 08:42:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:31.990 08:42:49 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:31.990 08:42:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:31.990 08:42:49 -- common/autotest_common.sh@10 -- # set +x 00:08:32.249 08:42:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.249 08:42:49 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:32.249 { 
00:08:32.249 "subsystems": [ 00:08:32.249 { 00:08:32.249 "subsystem": "vfio_user_target", 00:08:32.249 "config": null 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "subsystem": "keyring", 00:08:32.249 "config": [] 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "subsystem": "iobuf", 00:08:32.249 "config": [ 00:08:32.249 { 00:08:32.249 "method": "iobuf_set_options", 00:08:32.249 "params": { 00:08:32.249 "small_pool_count": 8192, 00:08:32.249 "large_pool_count": 1024, 00:08:32.249 "small_bufsize": 8192, 00:08:32.249 "large_bufsize": 135168 00:08:32.249 } 00:08:32.249 } 00:08:32.249 ] 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "subsystem": "sock", 00:08:32.249 "config": [ 00:08:32.249 { 00:08:32.249 "method": "sock_impl_set_options", 00:08:32.249 "params": { 00:08:32.249 "impl_name": "posix", 00:08:32.249 "recv_buf_size": 2097152, 00:08:32.249 "send_buf_size": 2097152, 00:08:32.249 "enable_recv_pipe": true, 00:08:32.249 "enable_quickack": false, 00:08:32.249 "enable_placement_id": 0, 00:08:32.249 "enable_zerocopy_send_server": true, 00:08:32.249 "enable_zerocopy_send_client": false, 00:08:32.249 "zerocopy_threshold": 0, 00:08:32.249 "tls_version": 0, 00:08:32.249 "enable_ktls": false 00:08:32.249 } 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "method": "sock_impl_set_options", 00:08:32.249 "params": { 00:08:32.249 "impl_name": "ssl", 00:08:32.249 "recv_buf_size": 4096, 00:08:32.249 "send_buf_size": 4096, 00:08:32.249 "enable_recv_pipe": true, 00:08:32.249 "enable_quickack": false, 00:08:32.249 "enable_placement_id": 0, 00:08:32.249 "enable_zerocopy_send_server": true, 00:08:32.249 "enable_zerocopy_send_client": false, 00:08:32.249 "zerocopy_threshold": 0, 00:08:32.249 "tls_version": 0, 00:08:32.249 "enable_ktls": false 00:08:32.249 } 00:08:32.249 } 00:08:32.249 ] 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "subsystem": "vmd", 00:08:32.249 "config": [] 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "subsystem": "accel", 00:08:32.249 "config": [ 00:08:32.249 { 00:08:32.249 "method": "accel_set_options", 00:08:32.249 "params": { 00:08:32.249 "small_cache_size": 128, 00:08:32.249 "large_cache_size": 16, 00:08:32.249 "task_count": 2048, 00:08:32.249 "sequence_count": 2048, 00:08:32.249 "buf_count": 2048 00:08:32.249 } 00:08:32.249 } 00:08:32.249 ] 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "subsystem": "bdev", 00:08:32.249 "config": [ 00:08:32.249 { 00:08:32.249 "method": "bdev_set_options", 00:08:32.249 "params": { 00:08:32.249 "bdev_io_pool_size": 65535, 00:08:32.249 "bdev_io_cache_size": 256, 00:08:32.249 "bdev_auto_examine": true, 00:08:32.249 "iobuf_small_cache_size": 128, 00:08:32.249 "iobuf_large_cache_size": 16 00:08:32.249 } 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "method": "bdev_raid_set_options", 00:08:32.249 "params": { 00:08:32.249 "process_window_size_kb": 1024 00:08:32.249 } 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "method": "bdev_iscsi_set_options", 00:08:32.249 "params": { 00:08:32.249 "timeout_sec": 30 00:08:32.249 } 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "method": "bdev_nvme_set_options", 00:08:32.249 "params": { 00:08:32.249 "action_on_timeout": "none", 00:08:32.249 "timeout_us": 0, 00:08:32.249 "timeout_admin_us": 0, 00:08:32.249 "keep_alive_timeout_ms": 10000, 00:08:32.249 "arbitration_burst": 0, 00:08:32.249 "low_priority_weight": 0, 00:08:32.249 "medium_priority_weight": 0, 00:08:32.249 "high_priority_weight": 0, 00:08:32.249 "nvme_adminq_poll_period_us": 10000, 00:08:32.249 "nvme_ioq_poll_period_us": 0, 00:08:32.249 "io_queue_requests": 0, 00:08:32.249 
"delay_cmd_submit": true, 00:08:32.249 "transport_retry_count": 4, 00:08:32.249 "bdev_retry_count": 3, 00:08:32.249 "transport_ack_timeout": 0, 00:08:32.249 "ctrlr_loss_timeout_sec": 0, 00:08:32.249 "reconnect_delay_sec": 0, 00:08:32.249 "fast_io_fail_timeout_sec": 0, 00:08:32.249 "disable_auto_failback": false, 00:08:32.249 "generate_uuids": false, 00:08:32.249 "transport_tos": 0, 00:08:32.249 "nvme_error_stat": false, 00:08:32.249 "rdma_srq_size": 0, 00:08:32.249 "io_path_stat": false, 00:08:32.249 "allow_accel_sequence": false, 00:08:32.249 "rdma_max_cq_size": 0, 00:08:32.249 "rdma_cm_event_timeout_ms": 0, 00:08:32.249 "dhchap_digests": [ 00:08:32.249 "sha256", 00:08:32.249 "sha384", 00:08:32.249 "sha512" 00:08:32.249 ], 00:08:32.249 "dhchap_dhgroups": [ 00:08:32.249 "null", 00:08:32.249 "ffdhe2048", 00:08:32.249 "ffdhe3072", 00:08:32.249 "ffdhe4096", 00:08:32.249 "ffdhe6144", 00:08:32.249 "ffdhe8192" 00:08:32.249 ] 00:08:32.249 } 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "method": "bdev_nvme_set_hotplug", 00:08:32.249 "params": { 00:08:32.249 "period_us": 100000, 00:08:32.249 "enable": false 00:08:32.249 } 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "method": "bdev_wait_for_examine" 00:08:32.249 } 00:08:32.249 ] 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "subsystem": "scsi", 00:08:32.249 "config": null 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "subsystem": "scheduler", 00:08:32.249 "config": [ 00:08:32.249 { 00:08:32.249 "method": "framework_set_scheduler", 00:08:32.249 "params": { 00:08:32.249 "name": "static" 00:08:32.249 } 00:08:32.249 } 00:08:32.249 ] 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "subsystem": "vhost_scsi", 00:08:32.249 "config": [] 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "subsystem": "vhost_blk", 00:08:32.249 "config": [] 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "subsystem": "ublk", 00:08:32.249 "config": [] 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "subsystem": "nbd", 00:08:32.249 "config": [] 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "subsystem": "nvmf", 00:08:32.249 "config": [ 00:08:32.249 { 00:08:32.249 "method": "nvmf_set_config", 00:08:32.249 "params": { 00:08:32.249 "discovery_filter": "match_any", 00:08:32.249 "admin_cmd_passthru": { 00:08:32.249 "identify_ctrlr": false 00:08:32.249 } 00:08:32.249 } 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "method": "nvmf_set_max_subsystems", 00:08:32.249 "params": { 00:08:32.249 "max_subsystems": 1024 00:08:32.249 } 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "method": "nvmf_set_crdt", 00:08:32.249 "params": { 00:08:32.249 "crdt1": 0, 00:08:32.249 "crdt2": 0, 00:08:32.249 "crdt3": 0 00:08:32.249 } 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "method": "nvmf_create_transport", 00:08:32.249 "params": { 00:08:32.249 "trtype": "TCP", 00:08:32.249 "max_queue_depth": 128, 00:08:32.249 "max_io_qpairs_per_ctrlr": 127, 00:08:32.249 "in_capsule_data_size": 4096, 00:08:32.249 "max_io_size": 131072, 00:08:32.249 "io_unit_size": 131072, 00:08:32.249 "max_aq_depth": 128, 00:08:32.249 "num_shared_buffers": 511, 00:08:32.249 "buf_cache_size": 4294967295, 00:08:32.249 "dif_insert_or_strip": false, 00:08:32.249 "zcopy": false, 00:08:32.249 "c2h_success": true, 00:08:32.249 "sock_priority": 0, 00:08:32.249 "abort_timeout_sec": 1, 00:08:32.249 "ack_timeout": 0, 00:08:32.249 "data_wr_pool_size": 0 00:08:32.249 } 00:08:32.249 } 00:08:32.249 ] 00:08:32.249 }, 00:08:32.249 { 00:08:32.249 "subsystem": "iscsi", 00:08:32.249 "config": [ 00:08:32.249 { 00:08:32.249 "method": "iscsi_set_options", 00:08:32.249 "params": { 00:08:32.249 
"node_base": "iqn.2016-06.io.spdk", 00:08:32.249 "max_sessions": 128, 00:08:32.249 "max_connections_per_session": 2, 00:08:32.249 "max_queue_depth": 64, 00:08:32.249 "default_time2wait": 2, 00:08:32.250 "default_time2retain": 20, 00:08:32.250 "first_burst_length": 8192, 00:08:32.250 "immediate_data": true, 00:08:32.250 "allow_duplicated_isid": false, 00:08:32.250 "error_recovery_level": 0, 00:08:32.250 "nop_timeout": 60, 00:08:32.250 "nop_in_interval": 30, 00:08:32.250 "disable_chap": false, 00:08:32.250 "require_chap": false, 00:08:32.250 "mutual_chap": false, 00:08:32.250 "chap_group": 0, 00:08:32.250 "max_large_datain_per_connection": 64, 00:08:32.250 "max_r2t_per_connection": 4, 00:08:32.250 "pdu_pool_size": 36864, 00:08:32.250 "immediate_data_pool_size": 16384, 00:08:32.250 "data_out_pool_size": 2048 00:08:32.250 } 00:08:32.250 } 00:08:32.250 ] 00:08:32.250 } 00:08:32.250 ] 00:08:32.250 } 00:08:32.250 08:42:49 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:32.250 08:42:49 -- rpc/skip_rpc.sh@40 -- # killprocess 1908724 00:08:32.250 08:42:49 -- common/autotest_common.sh@936 -- # '[' -z 1908724 ']' 00:08:32.250 08:42:49 -- common/autotest_common.sh@940 -- # kill -0 1908724 00:08:32.250 08:42:49 -- common/autotest_common.sh@941 -- # uname 00:08:32.250 08:42:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:32.250 08:42:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1908724 00:08:32.250 08:42:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:32.250 08:42:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:32.250 08:42:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1908724' 00:08:32.250 killing process with pid 1908724 00:08:32.250 08:42:49 -- common/autotest_common.sh@955 -- # kill 1908724 00:08:32.250 08:42:49 -- common/autotest_common.sh@960 -- # wait 1908724 00:08:32.510 08:42:49 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1908943 00:08:32.510 08:42:49 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:32.510 08:42:49 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:37.794 08:42:54 -- rpc/skip_rpc.sh@50 -- # killprocess 1908943 00:08:37.794 08:42:54 -- common/autotest_common.sh@936 -- # '[' -z 1908943 ']' 00:08:37.794 08:42:54 -- common/autotest_common.sh@940 -- # kill -0 1908943 00:08:37.794 08:42:54 -- common/autotest_common.sh@941 -- # uname 00:08:37.794 08:42:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:37.794 08:42:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1908943 00:08:37.794 08:42:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:37.794 08:42:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:37.794 08:42:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1908943' 00:08:37.794 killing process with pid 1908943 00:08:37.794 08:42:54 -- common/autotest_common.sh@955 -- # kill 1908943 00:08:37.794 08:42:54 -- common/autotest_common.sh@960 -- # wait 1908943 00:08:38.052 08:42:55 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:38.052 08:42:55 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:38.052 00:08:38.052 real 0m6.795s 00:08:38.052 user 0m6.577s 00:08:38.052 sys 0m0.644s 00:08:38.052 
08:42:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:38.052 08:42:55 -- common/autotest_common.sh@10 -- # set +x 00:08:38.052 ************************************ 00:08:38.052 END TEST skip_rpc_with_json 00:08:38.052 ************************************ 00:08:38.052 08:42:55 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:38.052 08:42:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:38.052 08:42:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.052 08:42:55 -- common/autotest_common.sh@10 -- # set +x 00:08:38.052 ************************************ 00:08:38.052 START TEST skip_rpc_with_delay 00:08:38.052 ************************************ 00:08:38.052 08:42:55 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:08:38.052 08:42:55 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:38.052 08:42:55 -- common/autotest_common.sh@638 -- # local es=0 00:08:38.052 08:42:55 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:38.052 08:42:55 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:38.052 08:42:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:38.052 08:42:55 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:38.052 08:42:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:38.052 08:42:55 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:38.052 08:42:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:38.052 08:42:55 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:38.052 08:42:55 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:38.052 08:42:55 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:38.309 [2024-04-26 08:42:55.327422] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
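
Note: the *ERROR* above is the expected outcome, not a failure of the run. skip_rpc_with_delay deliberately combines --no-rpc-server with --wait-for-rpc, and spdk_tgt rejects the pair because --wait-for-rpc pauses subsystem initialization until an RPC tells it to continue, which is impossible with the RPC server disabled. The usual, working flow looks roughly like the sketch below (paths shortened relative to this workspace; bdev_set_options is just one example of an RPC that must run before init):

./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
# issue pre-init RPCs here, for example:
./scripts/rpc.py bdev_set_options --bdev-io-pool-size 65536
# then resume subsystem initialization:
./scripts/rpc.py framework_start_init
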
00:08:38.309 [2024-04-26 08:42:55.327494] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:08:38.309 08:42:55 -- common/autotest_common.sh@641 -- # es=1 00:08:38.309 08:42:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:38.310 08:42:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:38.310 08:42:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:38.310 00:08:38.310 real 0m0.066s 00:08:38.310 user 0m0.041s 00:08:38.310 sys 0m0.025s 00:08:38.310 08:42:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:38.310 08:42:55 -- common/autotest_common.sh@10 -- # set +x 00:08:38.310 ************************************ 00:08:38.310 END TEST skip_rpc_with_delay 00:08:38.310 ************************************ 00:08:38.310 08:42:55 -- rpc/skip_rpc.sh@77 -- # uname 00:08:38.310 08:42:55 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:38.310 08:42:55 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:38.310 08:42:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:38.310 08:42:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.310 08:42:55 -- common/autotest_common.sh@10 -- # set +x 00:08:38.310 ************************************ 00:08:38.310 START TEST exit_on_failed_rpc_init 00:08:38.310 ************************************ 00:08:38.310 08:42:55 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:08:38.310 08:42:55 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1910052 00:08:38.310 08:42:55 -- rpc/skip_rpc.sh@63 -- # waitforlisten 1910052 00:08:38.310 08:42:55 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:38.310 08:42:55 -- common/autotest_common.sh@817 -- # '[' -z 1910052 ']' 00:08:38.310 08:42:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.310 08:42:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:38.310 08:42:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.310 08:42:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:38.310 08:42:55 -- common/autotest_common.sh@10 -- # set +x 00:08:38.567 [2024-04-26 08:42:55.570825] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:08:38.567 [2024-04-26 08:42:55.570868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910052 ] 00:08:38.567 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.567 [2024-04-26 08:42:55.638937] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.567 [2024-04-26 08:42:55.710210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.131 08:42:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:39.131 08:42:56 -- common/autotest_common.sh@850 -- # return 0 00:08:39.131 08:42:56 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:39.131 08:42:56 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:39.131 08:42:56 -- common/autotest_common.sh@638 -- # local es=0 00:08:39.131 08:42:56 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:39.131 08:42:56 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:39.131 08:42:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:39.131 08:42:56 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:39.131 08:42:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:39.131 08:42:56 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:39.131 08:42:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:39.131 08:42:56 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:39.131 08:42:56 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:39.131 08:42:56 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:39.389 [2024-04-26 08:42:56.386763] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:08:39.389 [2024-04-26 08:42:56.386810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910308 ] 00:08:39.389 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.389 [2024-04-26 08:42:56.450626] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.389 [2024-04-26 08:42:56.518320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.389 [2024-04-26 08:42:56.518393] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
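
Note: the "in use" *ERROR* above is the point of exit_on_failed_rpc_init: the first instance (pid 1910052) already owns the default RPC socket /var/tmp/spdk.sock, so the second -m 0x2 instance cannot listen and, as the following lines show, aborts startup with a non-zero exit. Outside this negative test, two targets can coexist by giving each its own socket with -r and pointing rpc.py at it with -s, roughly:

./build/bin/spdk_tgt -m 0x1 &                          # default socket /var/tmp/spdk.sock
./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &   # second instance on its own socket
./scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods
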
00:08:39.389 [2024-04-26 08:42:56.518405] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:39.389 [2024-04-26 08:42:56.518413] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:39.389 08:42:56 -- common/autotest_common.sh@641 -- # es=234 00:08:39.389 08:42:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:39.389 08:42:56 -- common/autotest_common.sh@650 -- # es=106 00:08:39.389 08:42:56 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:39.389 08:42:56 -- common/autotest_common.sh@658 -- # es=1 00:08:39.389 08:42:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:39.389 08:42:56 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:39.389 08:42:56 -- rpc/skip_rpc.sh@70 -- # killprocess 1910052 00:08:39.389 08:42:56 -- common/autotest_common.sh@936 -- # '[' -z 1910052 ']' 00:08:39.389 08:42:56 -- common/autotest_common.sh@940 -- # kill -0 1910052 00:08:39.389 08:42:56 -- common/autotest_common.sh@941 -- # uname 00:08:39.389 08:42:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:39.389 08:42:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1910052 00:08:39.647 08:42:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:39.647 08:42:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:39.647 08:42:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1910052' 00:08:39.647 killing process with pid 1910052 00:08:39.647 08:42:56 -- common/autotest_common.sh@955 -- # kill 1910052 00:08:39.647 08:42:56 -- common/autotest_common.sh@960 -- # wait 1910052 00:08:39.906 00:08:39.906 real 0m1.462s 00:08:39.906 user 0m1.636s 00:08:39.906 sys 0m0.432s 00:08:39.906 08:42:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:39.906 08:42:56 -- common/autotest_common.sh@10 -- # set +x 00:08:39.906 ************************************ 00:08:39.906 END TEST exit_on_failed_rpc_init 00:08:39.906 ************************************ 00:08:39.906 08:42:57 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:39.906 00:08:39.906 real 0m14.569s 00:08:39.906 user 0m13.707s 00:08:39.906 sys 0m1.877s 00:08:39.906 08:42:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:39.906 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:08:39.906 ************************************ 00:08:39.906 END TEST skip_rpc 00:08:39.906 ************************************ 00:08:39.906 08:42:57 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:39.906 08:42:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:39.906 08:42:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:39.906 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:08:40.164 ************************************ 00:08:40.164 START TEST rpc_client 00:08:40.164 ************************************ 00:08:40.164 08:42:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:40.164 * Looking for test storage... 
00:08:40.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:40.164 08:42:57 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:40.164 OK 00:08:40.164 08:42:57 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:40.164 00:08:40.164 real 0m0.141s 00:08:40.164 user 0m0.047s 00:08:40.164 sys 0m0.105s 00:08:40.164 08:42:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:40.164 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:08:40.164 ************************************ 00:08:40.164 END TEST rpc_client 00:08:40.164 ************************************ 00:08:40.164 08:42:57 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:40.164 08:42:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:40.164 08:42:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.164 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:08:40.422 ************************************ 00:08:40.422 START TEST json_config 00:08:40.422 ************************************ 00:08:40.422 08:42:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:40.422 08:42:57 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.422 08:42:57 -- nvmf/common.sh@7 -- # uname -s 00:08:40.422 08:42:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.422 08:42:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.422 08:42:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.422 08:42:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.422 08:42:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.422 08:42:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.422 08:42:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.422 08:42:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.422 08:42:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.422 08:42:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.422 08:42:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:40.422 08:42:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:40.422 08:42:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.422 08:42:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.422 08:42:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:40.422 08:42:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.422 08:42:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.680 08:42:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.680 08:42:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.680 08:42:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.680 08:42:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.680 08:42:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.680 08:42:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.680 08:42:57 -- paths/export.sh@5 -- # export PATH 00:08:40.680 08:42:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.680 08:42:57 -- nvmf/common.sh@47 -- # : 0 00:08:40.680 08:42:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.680 08:42:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.680 08:42:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.680 08:42:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.680 08:42:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.680 08:42:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.680 08:42:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:40.680 08:42:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.680 08:42:57 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:40.680 08:42:57 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:40.680 08:42:57 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:40.680 08:42:57 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:40.680 08:42:57 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:40.680 08:42:57 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:40.680 08:42:57 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:40.680 08:42:57 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:40.680 08:42:57 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:40.680 08:42:57 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:40.680 08:42:57 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:08:40.680 08:42:57 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:40.680 08:42:57 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:40.680 08:42:57 -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:40.680 08:42:57 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:40.680 08:42:57 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:08:40.680 INFO: JSON configuration test init 00:08:40.680 08:42:57 -- json_config/json_config.sh@357 -- # json_config_test_init 00:08:40.680 08:42:57 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:08:40.680 08:42:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:40.680 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:08:40.680 08:42:57 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:08:40.680 08:42:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:40.680 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:08:40.680 08:42:57 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:08:40.680 08:42:57 -- json_config/common.sh@9 -- # local app=target 00:08:40.680 08:42:57 -- json_config/common.sh@10 -- # shift 00:08:40.680 08:42:57 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:40.680 08:42:57 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:40.680 08:42:57 -- json_config/common.sh@15 -- # local app_extra_params= 00:08:40.680 08:42:57 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:40.680 08:42:57 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:40.680 08:42:57 -- json_config/common.sh@22 -- # app_pid["$app"]=1910696 00:08:40.680 08:42:57 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:40.680 Waiting for target to run... 00:08:40.680 08:42:57 -- json_config/common.sh@25 -- # waitforlisten 1910696 /var/tmp/spdk_tgt.sock 00:08:40.680 08:42:57 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:40.680 08:42:57 -- common/autotest_common.sh@817 -- # '[' -z 1910696 ']' 00:08:40.680 08:42:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:40.680 08:42:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:40.680 08:42:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:40.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:40.680 08:42:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:40.680 08:42:57 -- common/autotest_common.sh@10 -- # set +x 00:08:40.680 [2024-04-26 08:42:57.749740] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:08:40.680 [2024-04-26 08:42:57.749794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910696 ] 00:08:40.680 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.937 [2024-04-26 08:42:58.180159] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.195 [2024-04-26 08:42:58.268259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.453 08:42:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:41.453 08:42:58 -- common/autotest_common.sh@850 -- # return 0 00:08:41.453 08:42:58 -- json_config/common.sh@26 -- # echo '' 00:08:41.453 00:08:41.453 08:42:58 -- json_config/json_config.sh@269 -- # create_accel_config 00:08:41.453 08:42:58 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:08:41.453 08:42:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:41.453 08:42:58 -- common/autotest_common.sh@10 -- # set +x 00:08:41.453 08:42:58 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:08:41.453 08:42:58 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:08:41.453 08:42:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:41.453 08:42:58 -- common/autotest_common.sh@10 -- # set +x 00:08:41.453 08:42:58 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:41.453 08:42:58 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:08:41.453 08:42:58 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:44.732 08:43:01 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:08:44.732 08:43:01 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:44.732 08:43:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:44.732 08:43:01 -- common/autotest_common.sh@10 -- # set +x 00:08:44.732 08:43:01 -- json_config/json_config.sh@45 -- # local ret=0 00:08:44.732 08:43:01 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:44.732 08:43:01 -- json_config/json_config.sh@46 -- # local enabled_types 00:08:44.732 08:43:01 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:08:44.732 08:43:01 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:08:44.732 08:43:01 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:44.732 08:43:01 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:08:44.732 08:43:01 -- json_config/json_config.sh@48 -- # local get_types 00:08:44.732 08:43:01 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:08:44.732 08:43:01 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:08:44.732 08:43:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:44.732 08:43:01 -- common/autotest_common.sh@10 -- # set +x 00:08:44.732 08:43:01 -- json_config/json_config.sh@55 -- # return 0 00:08:44.732 08:43:01 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:08:44.732 08:43:01 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:08:44.732 08:43:01 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:08:44.732 08:43:01 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:08:44.732 08:43:01 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:08:44.733 08:43:01 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:08:44.733 08:43:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:44.733 08:43:01 -- common/autotest_common.sh@10 -- # set +x 00:08:44.733 08:43:01 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:44.733 08:43:01 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:08:44.733 08:43:01 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:08:44.733 08:43:01 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:44.733 08:43:01 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:44.989 MallocForNvmf0 00:08:44.989 08:43:02 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:44.989 08:43:02 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:44.989 MallocForNvmf1 00:08:44.989 08:43:02 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:44.989 08:43:02 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:45.246 [2024-04-26 08:43:02.346768] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.246 08:43:02 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:45.246 08:43:02 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:45.503 08:43:02 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:45.503 08:43:02 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:45.503 08:43:02 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:45.503 08:43:02 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:45.770 08:43:02 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:45.770 08:43:02 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:45.770 [2024-04-26 08:43:03.008863] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:46.050 08:43:03 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:08:46.050 08:43:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:46.050 
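
Note: the create_nvmf_subsystem_config phase above built a complete NVMe/TCP target purely over RPC. Condensed out of the xtrace noise, the sequence it ran is the standard recipe and can be replayed against a running target; the -s socket argument below matches the one used throughout this run:

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0   # 8 MiB / 512 B blocks
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1  # 4 MiB / 1024 B blocks
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
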
08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:08:46.050 08:43:03 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:08:46.050 08:43:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:46.050 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:08:46.050 08:43:03 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:08:46.050 08:43:03 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:46.050 08:43:03 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:46.050 MallocBdevForConfigChangeCheck 00:08:46.050 08:43:03 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:08:46.050 08:43:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:46.050 08:43:03 -- common/autotest_common.sh@10 -- # set +x 00:08:46.307 08:43:03 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:08:46.307 08:43:03 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:46.565 08:43:03 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:08:46.565 INFO: shutting down applications... 00:08:46.565 08:43:03 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:08:46.565 08:43:03 -- json_config/json_config.sh@368 -- # json_config_clear target 00:08:46.565 08:43:03 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:08:46.565 08:43:03 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:49.093 Calling clear_iscsi_subsystem 00:08:49.093 Calling clear_nvmf_subsystem 00:08:49.093 Calling clear_nbd_subsystem 00:08:49.093 Calling clear_ublk_subsystem 00:08:49.093 Calling clear_vhost_blk_subsystem 00:08:49.093 Calling clear_vhost_scsi_subsystem 00:08:49.093 Calling clear_bdev_subsystem 00:08:49.093 08:43:05 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:49.093 08:43:05 -- json_config/json_config.sh@343 -- # count=100 00:08:49.093 08:43:05 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:08:49.093 08:43:05 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:49.093 08:43:05 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:49.093 08:43:05 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:49.093 08:43:06 -- json_config/json_config.sh@345 -- # break 00:08:49.093 08:43:06 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:08:49.093 08:43:06 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:08:49.093 08:43:06 -- json_config/common.sh@31 -- # local app=target 00:08:49.093 08:43:06 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:49.093 08:43:06 -- json_config/common.sh@35 -- # [[ -n 1910696 ]] 00:08:49.093 08:43:06 -- json_config/common.sh@38 -- # kill -SIGINT 1910696 00:08:49.093 08:43:06 -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:49.093 08:43:06 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:08:49.093 08:43:06 -- json_config/common.sh@41 -- # kill -0 1910696 00:08:49.093 08:43:06 -- json_config/common.sh@45 -- # sleep 0.5 00:08:49.659 08:43:06 -- json_config/common.sh@40 -- # (( i++ )) 00:08:49.659 08:43:06 -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:49.659 08:43:06 -- json_config/common.sh@41 -- # kill -0 1910696 00:08:49.659 08:43:06 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:49.659 08:43:06 -- json_config/common.sh@43 -- # break 00:08:49.659 08:43:06 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:49.659 08:43:06 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:49.659 SPDK target shutdown done 00:08:49.659 08:43:06 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:08:49.659 INFO: relaunching applications... 00:08:49.659 08:43:06 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:49.659 08:43:06 -- json_config/common.sh@9 -- # local app=target 00:08:49.659 08:43:06 -- json_config/common.sh@10 -- # shift 00:08:49.659 08:43:06 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:49.659 08:43:06 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:49.659 08:43:06 -- json_config/common.sh@15 -- # local app_extra_params= 00:08:49.659 08:43:06 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:49.659 08:43:06 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:49.659 08:43:06 -- json_config/common.sh@22 -- # app_pid["$app"]=1912306 00:08:49.659 08:43:06 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:49.659 Waiting for target to run... 00:08:49.659 08:43:06 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:49.659 08:43:06 -- json_config/common.sh@25 -- # waitforlisten 1912306 /var/tmp/spdk_tgt.sock 00:08:49.659 08:43:06 -- common/autotest_common.sh@817 -- # '[' -z 1912306 ']' 00:08:49.659 08:43:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:49.659 08:43:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:49.659 08:43:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:49.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:49.659 08:43:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:49.659 08:43:06 -- common/autotest_common.sh@10 -- # set +x 00:08:49.659 [2024-04-26 08:43:06.672790] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
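
Note: the kill -SIGINT / (( i < 30 )) / kill -0 / sleep 0.5 trace seen just before the relaunch above is json_config/common.sh's graceful-shutdown helper: send SIGINT once, then poll for up to roughly 15 seconds before declaring the target down. A standalone rendering of the same loop, assuming $pid holds the target's pid:

kill -SIGINT "$pid"                      # ask spdk_tgt to shut down cleanly
for _ in $(seq 1 30); do
  kill -0 "$pid" 2>/dev/null || break    # process gone: clean shutdown
  sleep 0.5
done
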
00:08:49.659 [2024-04-26 08:43:06.672841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912306 ] 00:08:49.659 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.916 [2024-04-26 08:43:07.098877] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.174 [2024-04-26 08:43:07.186008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.455 [2024-04-26 08:43:10.201316] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.455 [2024-04-26 08:43:10.233654] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:53.713 08:43:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:53.713 08:43:10 -- common/autotest_common.sh@850 -- # return 0 00:08:53.713 08:43:10 -- json_config/common.sh@26 -- # echo '' 00:08:53.713 00:08:53.713 08:43:10 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:08:53.713 08:43:10 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:53.713 INFO: Checking if target configuration is the same... 00:08:53.713 08:43:10 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:08:53.713 08:43:10 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:53.713 08:43:10 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:53.713 + '[' 2 -ne 2 ']' 00:08:53.713 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:53.713 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:53.713 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:53.713 +++ basename /dev/fd/62 00:08:53.713 ++ mktemp /tmp/62.XXX 00:08:53.713 + tmp_file_1=/tmp/62.CgE 00:08:53.713 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:53.713 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:53.713 + tmp_file_2=/tmp/spdk_tgt_config.json.RXK 00:08:53.713 + ret=0 00:08:53.713 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:53.971 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:53.971 + diff -u /tmp/62.CgE /tmp/spdk_tgt_config.json.RXK 00:08:53.971 + echo 'INFO: JSON config files are the same' 00:08:53.971 INFO: JSON config files are the same 00:08:53.971 + rm /tmp/62.CgE /tmp/spdk_tgt_config.json.RXK 00:08:53.971 + exit 0 00:08:53.971 08:43:11 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:08:53.972 08:43:11 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:53.972 INFO: changing configuration and checking if this can be detected... 
00:08:53.972 08:43:11 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:53.972 08:43:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:54.229 08:43:11 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:08:54.229 08:43:11 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:54.229 08:43:11 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:54.229 + '[' 2 -ne 2 ']' 00:08:54.229 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:54.229 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:54.229 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:54.229 +++ basename /dev/fd/62 00:08:54.229 ++ mktemp /tmp/62.XXX 00:08:54.229 + tmp_file_1=/tmp/62.m2h 00:08:54.229 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:54.229 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:54.229 + tmp_file_2=/tmp/spdk_tgt_config.json.vj2 00:08:54.229 + ret=0 00:08:54.229 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:54.487 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:54.487 + diff -u /tmp/62.m2h /tmp/spdk_tgt_config.json.vj2 00:08:54.487 + ret=1 00:08:54.487 + echo '=== Start of file: /tmp/62.m2h ===' 00:08:54.487 + cat /tmp/62.m2h 00:08:54.487 + echo '=== End of file: /tmp/62.m2h ===' 00:08:54.487 + echo '' 00:08:54.487 + echo '=== Start of file: /tmp/spdk_tgt_config.json.vj2 ===' 00:08:54.487 + cat /tmp/spdk_tgt_config.json.vj2 00:08:54.487 + echo '=== End of file: /tmp/spdk_tgt_config.json.vj2 ===' 00:08:54.487 + echo '' 00:08:54.487 + rm /tmp/62.m2h /tmp/spdk_tgt_config.json.vj2 00:08:54.745 + exit 1 00:08:54.745 08:43:11 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:08:54.745 INFO: configuration change detected. 
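
Note: both comparisons above ("JSON config files are the same", then "configuration change detected" after bdev_malloc_delete) use the same mechanism: save_config dumps the live configuration, config_filter.py -method sort canonicalizes key order in both the live dump and the on-disk file, and a plain diff -u delivers the verdict, exit 0 for identical and 1 for drift. Standalone, with illustrative temp-file names in place of the mktemp ones above:

./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
./test/json_config/config_filter.py -method sort < /tmp/live.json          > /tmp/live.sorted
./test/json_config/config_filter.py -method sort < spdk_tgt_config.json    > /tmp/disk.sorted
diff -u /tmp/disk.sorted /tmp/live.sorted && echo 'config unchanged' || echo 'config drift detected'
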
00:08:54.745 08:43:11 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:08:54.745 08:43:11 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:08:54.745 08:43:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:54.745 08:43:11 -- common/autotest_common.sh@10 -- # set +x 00:08:54.745 08:43:11 -- json_config/json_config.sh@307 -- # local ret=0 00:08:54.745 08:43:11 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:08:54.745 08:43:11 -- json_config/json_config.sh@317 -- # [[ -n 1912306 ]] 00:08:54.745 08:43:11 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:08:54.745 08:43:11 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:08:54.745 08:43:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:54.745 08:43:11 -- common/autotest_common.sh@10 -- # set +x 00:08:54.745 08:43:11 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:08:54.745 08:43:11 -- json_config/json_config.sh@193 -- # uname -s 00:08:54.745 08:43:11 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:08:54.745 08:43:11 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:08:54.745 08:43:11 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:08:54.745 08:43:11 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:08:54.745 08:43:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:54.745 08:43:11 -- common/autotest_common.sh@10 -- # set +x 00:08:54.745 08:43:11 -- json_config/json_config.sh@323 -- # killprocess 1912306 00:08:54.745 08:43:11 -- common/autotest_common.sh@936 -- # '[' -z 1912306 ']' 00:08:54.745 08:43:11 -- common/autotest_common.sh@940 -- # kill -0 1912306 00:08:54.745 08:43:11 -- common/autotest_common.sh@941 -- # uname 00:08:54.745 08:43:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:54.745 08:43:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1912306 00:08:54.745 08:43:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:54.745 08:43:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:54.745 08:43:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1912306' 00:08:54.745 killing process with pid 1912306 00:08:54.745 08:43:11 -- common/autotest_common.sh@955 -- # kill 1912306 00:08:54.745 08:43:11 -- common/autotest_common.sh@960 -- # wait 1912306 00:08:57.281 08:43:13 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:57.281 08:43:13 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:08:57.281 08:43:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:57.281 08:43:13 -- common/autotest_common.sh@10 -- # set +x 00:08:57.281 08:43:14 -- json_config/json_config.sh@328 -- # return 0 00:08:57.281 08:43:14 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:08:57.281 INFO: Success 00:08:57.281 00:08:57.281 real 0m16.469s 00:08:57.281 user 0m16.822s 00:08:57.281 sys 0m2.339s 00:08:57.281 08:43:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:57.281 08:43:14 -- common/autotest_common.sh@10 -- # set +x 00:08:57.281 ************************************ 00:08:57.281 END TEST json_config 00:08:57.281 ************************************ 00:08:57.281 08:43:14 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:57.281 08:43:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:57.281 08:43:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.281 08:43:14 -- common/autotest_common.sh@10 -- # set +x 00:08:57.281 ************************************ 00:08:57.281 START TEST json_config_extra_key 00:08:57.281 ************************************ 00:08:57.282 08:43:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:57.282 08:43:14 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.282 08:43:14 -- nvmf/common.sh@7 -- # uname -s 00:08:57.282 08:43:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.282 08:43:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.282 08:43:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.282 08:43:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.282 08:43:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.282 08:43:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.282 08:43:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.282 08:43:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.282 08:43:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.282 08:43:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.282 08:43:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:57.282 08:43:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:57.282 08:43:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.282 08:43:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.282 08:43:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:57.282 08:43:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.282 08:43:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.282 08:43:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.282 08:43:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.282 08:43:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.282 08:43:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.282 08:43:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.282 08:43:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.282 08:43:14 -- paths/export.sh@5 -- # export PATH 00:08:57.282 08:43:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.282 08:43:14 -- nvmf/common.sh@47 -- # : 0 00:08:57.282 08:43:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:57.282 08:43:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:57.282 08:43:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.282 08:43:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.282 08:43:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.282 08:43:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:57.282 08:43:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:57.282 08:43:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:57.282 08:43:14 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:57.282 08:43:14 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:57.282 08:43:14 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:57.282 08:43:14 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:57.282 08:43:14 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:57.282 08:43:14 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:57.282 08:43:14 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:57.282 08:43:14 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:57.282 08:43:14 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:57.282 08:43:14 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:57.282 08:43:14 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:57.282 INFO: launching applications... 
00:08:57.282 08:43:14 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:57.282 08:43:14 -- json_config/common.sh@9 -- # local app=target 00:08:57.282 08:43:14 -- json_config/common.sh@10 -- # shift 00:08:57.282 08:43:14 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:57.282 08:43:14 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:57.282 08:43:14 -- json_config/common.sh@15 -- # local app_extra_params= 00:08:57.282 08:43:14 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:57.282 08:43:14 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:57.282 08:43:14 -- json_config/common.sh@22 -- # app_pid["$app"]=1913807 00:08:57.282 08:43:14 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:57.282 Waiting for target to run... 00:08:57.282 08:43:14 -- json_config/common.sh@25 -- # waitforlisten 1913807 /var/tmp/spdk_tgt.sock 00:08:57.282 08:43:14 -- common/autotest_common.sh@817 -- # '[' -z 1913807 ']' 00:08:57.282 08:43:14 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:57.282 08:43:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:57.282 08:43:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:57.282 08:43:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:57.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:57.282 08:43:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:57.282 08:43:14 -- common/autotest_common.sh@10 -- # set +x 00:08:57.282 [2024-04-26 08:43:14.403106] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:08:57.282 [2024-04-26 08:43:14.403166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913807 ] 00:08:57.282 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.540 [2024-04-26 08:43:14.695254] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.540 [2024-04-26 08:43:14.755854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.104 08:43:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:58.104 08:43:15 -- common/autotest_common.sh@850 -- # return 0 00:08:58.104 08:43:15 -- json_config/common.sh@26 -- # echo '' 00:08:58.104 00:08:58.104 08:43:15 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:58.104 INFO: shutting down applications... 
00:08:58.104 08:43:15 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:58.104 08:43:15 -- json_config/common.sh@31 -- # local app=target 00:08:58.104 08:43:15 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:58.104 08:43:15 -- json_config/common.sh@35 -- # [[ -n 1913807 ]] 00:08:58.104 08:43:15 -- json_config/common.sh@38 -- # kill -SIGINT 1913807 00:08:58.104 08:43:15 -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:58.104 08:43:15 -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:58.104 08:43:15 -- json_config/common.sh@41 -- # kill -0 1913807 00:08:58.104 08:43:15 -- json_config/common.sh@45 -- # sleep 0.5 00:08:58.676 08:43:15 -- json_config/common.sh@40 -- # (( i++ )) 00:08:58.676 08:43:15 -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:58.676 08:43:15 -- json_config/common.sh@41 -- # kill -0 1913807 00:08:58.676 08:43:15 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:58.676 08:43:15 -- json_config/common.sh@43 -- # break 00:08:58.676 08:43:15 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:58.676 08:43:15 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:58.676 SPDK target shutdown done 00:08:58.676 08:43:15 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:58.676 Success 00:08:58.676 00:08:58.676 real 0m1.463s 00:08:58.676 user 0m1.221s 00:08:58.676 sys 0m0.407s 00:08:58.676 08:43:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:58.676 08:43:15 -- common/autotest_common.sh@10 -- # set +x 00:08:58.676 ************************************ 00:08:58.676 END TEST json_config_extra_key 00:08:58.676 ************************************ 00:08:58.676 08:43:15 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:58.676 08:43:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:58.676 08:43:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.676 08:43:15 -- common/autotest_common.sh@10 -- # set +x 00:08:58.972 ************************************ 00:08:58.972 START TEST alias_rpc 00:08:58.972 ************************************ 00:08:58.972 08:43:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:58.972 * Looking for test storage... 00:08:58.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:08:58.972 08:43:16 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:58.972 08:43:16 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1914202 00:08:58.972 08:43:16 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1914202 00:08:58.972 08:43:16 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:58.972 08:43:16 -- common/autotest_common.sh@817 -- # '[' -z 1914202 ']' 00:08:58.972 08:43:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.972 08:43:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:58.972 08:43:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:58.972 08:43:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:58.972 08:43:16 -- common/autotest_common.sh@10 -- # set +x 00:08:58.972 [2024-04-26 08:43:16.087232] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:08:58.972 [2024-04-26 08:43:16.087280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914202 ] 00:08:58.972 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.972 [2024-04-26 08:43:16.155008] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.230 [2024-04-26 08:43:16.226647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.795 08:43:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:59.795 08:43:16 -- common/autotest_common.sh@850 -- # return 0 00:08:59.795 08:43:16 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:09:00.052 08:43:17 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1914202 00:09:00.052 08:43:17 -- common/autotest_common.sh@936 -- # '[' -z 1914202 ']' 00:09:00.052 08:43:17 -- common/autotest_common.sh@940 -- # kill -0 1914202 00:09:00.052 08:43:17 -- common/autotest_common.sh@941 -- # uname 00:09:00.052 08:43:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:00.052 08:43:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1914202 00:09:00.052 08:43:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:00.052 08:43:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:00.052 08:43:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1914202' 00:09:00.052 killing process with pid 1914202 00:09:00.052 08:43:17 -- common/autotest_common.sh@955 -- # kill 1914202 00:09:00.052 08:43:17 -- common/autotest_common.sh@960 -- # wait 1914202 00:09:00.310 00:09:00.310 real 0m1.522s 00:09:00.310 user 0m1.621s 00:09:00.310 sys 0m0.438s 00:09:00.310 08:43:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:00.310 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:09:00.310 ************************************ 00:09:00.310 END TEST alias_rpc 00:09:00.310 ************************************ 00:09:00.310 08:43:17 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:09:00.310 08:43:17 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:00.310 08:43:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:00.310 08:43:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.310 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:09:00.568 ************************************ 00:09:00.568 START TEST spdkcli_tcp 00:09:00.568 ************************************ 00:09:00.568 08:43:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:00.568 * Looking for test storage... 
00:09:00.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:09:00.568 08:43:17 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:09:00.568 08:43:17 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:09:00.568 08:43:17 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:09:00.568 08:43:17 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:00.568 08:43:17 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:00.568 08:43:17 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:00.568 08:43:17 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:00.568 08:43:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:00.568 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:09:00.568 08:43:17 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:00.568 08:43:17 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1914538 00:09:00.568 08:43:17 -- spdkcli/tcp.sh@27 -- # waitforlisten 1914538 00:09:00.568 08:43:17 -- common/autotest_common.sh@817 -- # '[' -z 1914538 ']' 00:09:00.568 08:43:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.568 08:43:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:00.568 08:43:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.568 08:43:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:00.568 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:09:00.568 [2024-04-26 08:43:17.805723] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:09:00.568 [2024-04-26 08:43:17.805774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914538 ] 00:09:00.826 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.826 [2024-04-26 08:43:17.874911] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:00.826 [2024-04-26 08:43:17.946860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.826 [2024-04-26 08:43:17.946863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.391 08:43:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:01.391 08:43:18 -- common/autotest_common.sh@850 -- # return 0 00:09:01.391 08:43:18 -- spdkcli/tcp.sh@31 -- # socat_pid=1914597 00:09:01.391 08:43:18 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:01.391 08:43:18 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:01.649 [ 00:09:01.649 "bdev_malloc_delete", 00:09:01.649 "bdev_malloc_create", 00:09:01.649 "bdev_null_resize", 00:09:01.649 "bdev_null_delete", 00:09:01.649 "bdev_null_create", 00:09:01.649 "bdev_nvme_cuse_unregister", 00:09:01.649 "bdev_nvme_cuse_register", 00:09:01.649 "bdev_opal_new_user", 00:09:01.649 "bdev_opal_set_lock_state", 00:09:01.649 "bdev_opal_delete", 00:09:01.649 "bdev_opal_get_info", 00:09:01.649 "bdev_opal_create", 00:09:01.649 "bdev_nvme_opal_revert", 00:09:01.649 "bdev_nvme_opal_init", 00:09:01.649 "bdev_nvme_send_cmd", 00:09:01.649 "bdev_nvme_get_path_iostat", 00:09:01.649 "bdev_nvme_get_mdns_discovery_info", 00:09:01.649 "bdev_nvme_stop_mdns_discovery", 00:09:01.649 "bdev_nvme_start_mdns_discovery", 00:09:01.649 "bdev_nvme_set_multipath_policy", 00:09:01.649 "bdev_nvme_set_preferred_path", 00:09:01.649 "bdev_nvme_get_io_paths", 00:09:01.649 "bdev_nvme_remove_error_injection", 00:09:01.649 "bdev_nvme_add_error_injection", 00:09:01.649 "bdev_nvme_get_discovery_info", 00:09:01.649 "bdev_nvme_stop_discovery", 00:09:01.649 "bdev_nvme_start_discovery", 00:09:01.649 "bdev_nvme_get_controller_health_info", 00:09:01.649 "bdev_nvme_disable_controller", 00:09:01.649 "bdev_nvme_enable_controller", 00:09:01.649 "bdev_nvme_reset_controller", 00:09:01.649 "bdev_nvme_get_transport_statistics", 00:09:01.649 "bdev_nvme_apply_firmware", 00:09:01.649 "bdev_nvme_detach_controller", 00:09:01.649 "bdev_nvme_get_controllers", 00:09:01.649 "bdev_nvme_attach_controller", 00:09:01.649 "bdev_nvme_set_hotplug", 00:09:01.649 "bdev_nvme_set_options", 00:09:01.649 "bdev_passthru_delete", 00:09:01.649 "bdev_passthru_create", 00:09:01.649 "bdev_lvol_grow_lvstore", 00:09:01.649 "bdev_lvol_get_lvols", 00:09:01.649 "bdev_lvol_get_lvstores", 00:09:01.649 "bdev_lvol_delete", 00:09:01.649 "bdev_lvol_set_read_only", 00:09:01.649 "bdev_lvol_resize", 00:09:01.649 "bdev_lvol_decouple_parent", 00:09:01.649 "bdev_lvol_inflate", 00:09:01.649 "bdev_lvol_rename", 00:09:01.649 "bdev_lvol_clone_bdev", 00:09:01.649 "bdev_lvol_clone", 00:09:01.649 "bdev_lvol_snapshot", 00:09:01.649 "bdev_lvol_create", 00:09:01.649 "bdev_lvol_delete_lvstore", 00:09:01.649 "bdev_lvol_rename_lvstore", 00:09:01.649 "bdev_lvol_create_lvstore", 00:09:01.649 "bdev_raid_set_options", 00:09:01.649 "bdev_raid_remove_base_bdev", 00:09:01.649 "bdev_raid_add_base_bdev", 00:09:01.649 "bdev_raid_delete", 00:09:01.649 "bdev_raid_create", 
00:09:01.649 "bdev_raid_get_bdevs", 00:09:01.649 "bdev_error_inject_error", 00:09:01.649 "bdev_error_delete", 00:09:01.649 "bdev_error_create", 00:09:01.649 "bdev_split_delete", 00:09:01.649 "bdev_split_create", 00:09:01.649 "bdev_delay_delete", 00:09:01.649 "bdev_delay_create", 00:09:01.649 "bdev_delay_update_latency", 00:09:01.649 "bdev_zone_block_delete", 00:09:01.649 "bdev_zone_block_create", 00:09:01.649 "blobfs_create", 00:09:01.649 "blobfs_detect", 00:09:01.649 "blobfs_set_cache_size", 00:09:01.649 "bdev_aio_delete", 00:09:01.649 "bdev_aio_rescan", 00:09:01.649 "bdev_aio_create", 00:09:01.649 "bdev_ftl_set_property", 00:09:01.649 "bdev_ftl_get_properties", 00:09:01.649 "bdev_ftl_get_stats", 00:09:01.649 "bdev_ftl_unmap", 00:09:01.649 "bdev_ftl_unload", 00:09:01.649 "bdev_ftl_delete", 00:09:01.649 "bdev_ftl_load", 00:09:01.649 "bdev_ftl_create", 00:09:01.649 "bdev_virtio_attach_controller", 00:09:01.649 "bdev_virtio_scsi_get_devices", 00:09:01.649 "bdev_virtio_detach_controller", 00:09:01.649 "bdev_virtio_blk_set_hotplug", 00:09:01.649 "bdev_iscsi_delete", 00:09:01.649 "bdev_iscsi_create", 00:09:01.649 "bdev_iscsi_set_options", 00:09:01.649 "accel_error_inject_error", 00:09:01.649 "ioat_scan_accel_module", 00:09:01.649 "dsa_scan_accel_module", 00:09:01.649 "iaa_scan_accel_module", 00:09:01.649 "vfu_virtio_create_scsi_endpoint", 00:09:01.649 "vfu_virtio_scsi_remove_target", 00:09:01.649 "vfu_virtio_scsi_add_target", 00:09:01.649 "vfu_virtio_create_blk_endpoint", 00:09:01.649 "vfu_virtio_delete_endpoint", 00:09:01.649 "keyring_file_remove_key", 00:09:01.649 "keyring_file_add_key", 00:09:01.649 "iscsi_get_histogram", 00:09:01.649 "iscsi_enable_histogram", 00:09:01.649 "iscsi_set_options", 00:09:01.649 "iscsi_get_auth_groups", 00:09:01.649 "iscsi_auth_group_remove_secret", 00:09:01.649 "iscsi_auth_group_add_secret", 00:09:01.649 "iscsi_delete_auth_group", 00:09:01.649 "iscsi_create_auth_group", 00:09:01.649 "iscsi_set_discovery_auth", 00:09:01.649 "iscsi_get_options", 00:09:01.649 "iscsi_target_node_request_logout", 00:09:01.649 "iscsi_target_node_set_redirect", 00:09:01.649 "iscsi_target_node_set_auth", 00:09:01.649 "iscsi_target_node_add_lun", 00:09:01.649 "iscsi_get_stats", 00:09:01.649 "iscsi_get_connections", 00:09:01.649 "iscsi_portal_group_set_auth", 00:09:01.649 "iscsi_start_portal_group", 00:09:01.649 "iscsi_delete_portal_group", 00:09:01.649 "iscsi_create_portal_group", 00:09:01.649 "iscsi_get_portal_groups", 00:09:01.649 "iscsi_delete_target_node", 00:09:01.649 "iscsi_target_node_remove_pg_ig_maps", 00:09:01.649 "iscsi_target_node_add_pg_ig_maps", 00:09:01.649 "iscsi_create_target_node", 00:09:01.649 "iscsi_get_target_nodes", 00:09:01.649 "iscsi_delete_initiator_group", 00:09:01.649 "iscsi_initiator_group_remove_initiators", 00:09:01.649 "iscsi_initiator_group_add_initiators", 00:09:01.649 "iscsi_create_initiator_group", 00:09:01.649 "iscsi_get_initiator_groups", 00:09:01.649 "nvmf_set_crdt", 00:09:01.649 "nvmf_set_config", 00:09:01.649 "nvmf_set_max_subsystems", 00:09:01.649 "nvmf_subsystem_get_listeners", 00:09:01.649 "nvmf_subsystem_get_qpairs", 00:09:01.649 "nvmf_subsystem_get_controllers", 00:09:01.649 "nvmf_get_stats", 00:09:01.649 "nvmf_get_transports", 00:09:01.649 "nvmf_create_transport", 00:09:01.649 "nvmf_get_targets", 00:09:01.649 "nvmf_delete_target", 00:09:01.649 "nvmf_create_target", 00:09:01.649 "nvmf_subsystem_allow_any_host", 00:09:01.649 "nvmf_subsystem_remove_host", 00:09:01.649 "nvmf_subsystem_add_host", 00:09:01.649 "nvmf_ns_remove_host", 00:09:01.649 
"nvmf_ns_add_host", 00:09:01.649 "nvmf_subsystem_remove_ns", 00:09:01.649 "nvmf_subsystem_add_ns", 00:09:01.649 "nvmf_subsystem_listener_set_ana_state", 00:09:01.649 "nvmf_discovery_get_referrals", 00:09:01.649 "nvmf_discovery_remove_referral", 00:09:01.649 "nvmf_discovery_add_referral", 00:09:01.649 "nvmf_subsystem_remove_listener", 00:09:01.649 "nvmf_subsystem_add_listener", 00:09:01.649 "nvmf_delete_subsystem", 00:09:01.649 "nvmf_create_subsystem", 00:09:01.649 "nvmf_get_subsystems", 00:09:01.649 "env_dpdk_get_mem_stats", 00:09:01.649 "nbd_get_disks", 00:09:01.649 "nbd_stop_disk", 00:09:01.649 "nbd_start_disk", 00:09:01.649 "ublk_recover_disk", 00:09:01.649 "ublk_get_disks", 00:09:01.649 "ublk_stop_disk", 00:09:01.649 "ublk_start_disk", 00:09:01.649 "ublk_destroy_target", 00:09:01.649 "ublk_create_target", 00:09:01.649 "virtio_blk_create_transport", 00:09:01.649 "virtio_blk_get_transports", 00:09:01.649 "vhost_controller_set_coalescing", 00:09:01.649 "vhost_get_controllers", 00:09:01.649 "vhost_delete_controller", 00:09:01.649 "vhost_create_blk_controller", 00:09:01.649 "vhost_scsi_controller_remove_target", 00:09:01.649 "vhost_scsi_controller_add_target", 00:09:01.649 "vhost_start_scsi_controller", 00:09:01.649 "vhost_create_scsi_controller", 00:09:01.649 "thread_set_cpumask", 00:09:01.649 "framework_get_scheduler", 00:09:01.649 "framework_set_scheduler", 00:09:01.649 "framework_get_reactors", 00:09:01.649 "thread_get_io_channels", 00:09:01.649 "thread_get_pollers", 00:09:01.649 "thread_get_stats", 00:09:01.649 "framework_monitor_context_switch", 00:09:01.649 "spdk_kill_instance", 00:09:01.649 "log_enable_timestamps", 00:09:01.649 "log_get_flags", 00:09:01.649 "log_clear_flag", 00:09:01.649 "log_set_flag", 00:09:01.649 "log_get_level", 00:09:01.649 "log_set_level", 00:09:01.649 "log_get_print_level", 00:09:01.649 "log_set_print_level", 00:09:01.649 "framework_enable_cpumask_locks", 00:09:01.649 "framework_disable_cpumask_locks", 00:09:01.649 "framework_wait_init", 00:09:01.649 "framework_start_init", 00:09:01.649 "scsi_get_devices", 00:09:01.649 "bdev_get_histogram", 00:09:01.649 "bdev_enable_histogram", 00:09:01.649 "bdev_set_qos_limit", 00:09:01.649 "bdev_set_qd_sampling_period", 00:09:01.649 "bdev_get_bdevs", 00:09:01.649 "bdev_reset_iostat", 00:09:01.649 "bdev_get_iostat", 00:09:01.649 "bdev_examine", 00:09:01.649 "bdev_wait_for_examine", 00:09:01.649 "bdev_set_options", 00:09:01.649 "notify_get_notifications", 00:09:01.649 "notify_get_types", 00:09:01.649 "accel_get_stats", 00:09:01.649 "accel_set_options", 00:09:01.649 "accel_set_driver", 00:09:01.649 "accel_crypto_key_destroy", 00:09:01.649 "accel_crypto_keys_get", 00:09:01.650 "accel_crypto_key_create", 00:09:01.650 "accel_assign_opc", 00:09:01.650 "accel_get_module_info", 00:09:01.650 "accel_get_opc_assignments", 00:09:01.650 "vmd_rescan", 00:09:01.650 "vmd_remove_device", 00:09:01.650 "vmd_enable", 00:09:01.650 "sock_get_default_impl", 00:09:01.650 "sock_set_default_impl", 00:09:01.650 "sock_impl_set_options", 00:09:01.650 "sock_impl_get_options", 00:09:01.650 "iobuf_get_stats", 00:09:01.650 "iobuf_set_options", 00:09:01.650 "keyring_get_keys", 00:09:01.650 "framework_get_pci_devices", 00:09:01.650 "framework_get_config", 00:09:01.650 "framework_get_subsystems", 00:09:01.650 "vfu_tgt_set_base_path", 00:09:01.650 "trace_get_info", 00:09:01.650 "trace_get_tpoint_group_mask", 00:09:01.650 "trace_disable_tpoint_group", 00:09:01.650 "trace_enable_tpoint_group", 00:09:01.650 "trace_clear_tpoint_mask", 00:09:01.650 
"trace_set_tpoint_mask", 00:09:01.650 "spdk_get_version", 00:09:01.650 "rpc_get_methods" 00:09:01.650 ] 00:09:01.650 08:43:18 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:01.650 08:43:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:01.650 08:43:18 -- common/autotest_common.sh@10 -- # set +x 00:09:01.650 08:43:18 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:01.650 08:43:18 -- spdkcli/tcp.sh@38 -- # killprocess 1914538 00:09:01.650 08:43:18 -- common/autotest_common.sh@936 -- # '[' -z 1914538 ']' 00:09:01.650 08:43:18 -- common/autotest_common.sh@940 -- # kill -0 1914538 00:09:01.650 08:43:18 -- common/autotest_common.sh@941 -- # uname 00:09:01.650 08:43:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:01.650 08:43:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1914538 00:09:01.650 08:43:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:01.650 08:43:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:01.650 08:43:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1914538' 00:09:01.650 killing process with pid 1914538 00:09:01.650 08:43:18 -- common/autotest_common.sh@955 -- # kill 1914538 00:09:01.650 08:43:18 -- common/autotest_common.sh@960 -- # wait 1914538 00:09:02.216 00:09:02.216 real 0m1.570s 00:09:02.216 user 0m2.852s 00:09:02.216 sys 0m0.488s 00:09:02.216 08:43:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:02.216 08:43:19 -- common/autotest_common.sh@10 -- # set +x 00:09:02.216 ************************************ 00:09:02.216 END TEST spdkcli_tcp 00:09:02.216 ************************************ 00:09:02.216 08:43:19 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:02.216 08:43:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:02.216 08:43:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.216 08:43:19 -- common/autotest_common.sh@10 -- # set +x 00:09:02.216 ************************************ 00:09:02.216 START TEST dpdk_mem_utility 00:09:02.216 ************************************ 00:09:02.216 08:43:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:02.474 * Looking for test storage... 00:09:02.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:09:02.474 08:43:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:02.474 08:43:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1914889 00:09:02.474 08:43:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1914889 00:09:02.474 08:43:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:02.474 08:43:19 -- common/autotest_common.sh@817 -- # '[' -z 1914889 ']' 00:09:02.474 08:43:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.474 08:43:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:02.474 08:43:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:02.474 08:43:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:02.474 08:43:19 -- common/autotest_common.sh@10 -- # set +x 00:09:02.475 [2024-04-26 08:43:19.549995] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:02.475 [2024-04-26 08:43:19.550041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914889 ] 00:09:02.475 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.475 [2024-04-26 08:43:19.618683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.475 [2024-04-26 08:43:19.685880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.133 08:43:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:03.133 08:43:20 -- common/autotest_common.sh@850 -- # return 0 00:09:03.133 08:43:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:03.133 08:43:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:03.133 08:43:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.133 08:43:20 -- common/autotest_common.sh@10 -- # set +x 00:09:03.133 { 00:09:03.133 "filename": "/tmp/spdk_mem_dump.txt" 00:09:03.133 } 00:09:03.133 08:43:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.133 08:43:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:03.393 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:03.393 1 heaps totaling size 814.000000 MiB 00:09:03.393 size: 814.000000 MiB heap id: 0 00:09:03.393 end heaps---------- 00:09:03.393 8 mempools totaling size 598.116089 MiB 00:09:03.393 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:03.393 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:03.393 size: 84.521057 MiB name: bdev_io_1914889 00:09:03.393 size: 51.011292 MiB name: evtpool_1914889 00:09:03.393 size: 50.003479 MiB name: msgpool_1914889 00:09:03.393 size: 21.763794 MiB name: PDU_Pool 00:09:03.393 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:03.393 size: 0.026123 MiB name: Session_Pool 00:09:03.393 end mempools------- 00:09:03.393 6 memzones totaling size 4.142822 MiB 00:09:03.393 size: 1.000366 MiB name: RG_ring_0_1914889 00:09:03.393 size: 1.000366 MiB name: RG_ring_1_1914889 00:09:03.393 size: 1.000366 MiB name: RG_ring_4_1914889 00:09:03.393 size: 1.000366 MiB name: RG_ring_5_1914889 00:09:03.393 size: 0.125366 MiB name: RG_ring_2_1914889 00:09:03.393 size: 0.015991 MiB name: RG_ring_3_1914889 00:09:03.393 end memzones------- 00:09:03.393 08:43:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:09:03.393 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:09:03.393 list of free elements. 
size: 12.519348 MiB 00:09:03.393 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:03.393 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:03.393 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:03.393 element at address: 0x200003e00000 with size: 0.996277 MiB 00:09:03.393 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:03.393 element at address: 0x200013800000 with size: 0.978699 MiB 00:09:03.393 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:03.393 element at address: 0x200019200000 with size: 0.936584 MiB 00:09:03.393 element at address: 0x200000200000 with size: 0.841614 MiB 00:09:03.393 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:09:03.393 element at address: 0x20000b200000 with size: 0.490723 MiB 00:09:03.393 element at address: 0x200000800000 with size: 0.487793 MiB 00:09:03.393 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:03.393 element at address: 0x200027e00000 with size: 0.410034 MiB 00:09:03.393 element at address: 0x200003a00000 with size: 0.355530 MiB 00:09:03.393 list of standard malloc elements. size: 199.218079 MiB 00:09:03.393 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:03.393 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:03.393 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:03.393 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:03.393 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:03.393 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:03.393 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:03.393 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:03.393 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:09:03.393 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:09:03.393 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:09:03.393 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:09:03.393 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:03.393 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:03.393 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:03.393 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:03.393 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:09:03.393 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:09:03.393 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:09:03.393 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:09:03.393 element at address: 0x200003adb300 with size: 0.000183 MiB 00:09:03.393 element at address: 0x200003adb500 with size: 0.000183 MiB 00:09:03.393 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:09:03.393 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:03.393 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:03.393 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:03.393 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:09:03.393 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:09:03.393 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:09:03.393 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:09:03.393 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:09:03.393 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:09:03.393 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:09:03.393 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:09:03.393 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:03.393 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:09:03.393 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:09:03.393 element at address: 0x200027e69040 with size: 0.000183 MiB 00:09:03.393 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:09:03.393 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:03.393 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:03.393 list of memzone associated elements. size: 602.262573 MiB 00:09:03.393 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:03.393 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:03.393 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:03.393 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:03.393 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:03.393 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1914889_0 00:09:03.393 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:03.393 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1914889_0 00:09:03.393 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:03.393 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1914889_0 00:09:03.393 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:03.393 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:03.393 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:03.393 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:03.393 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:03.393 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1914889 00:09:03.393 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:03.393 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1914889 00:09:03.393 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:03.393 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1914889 00:09:03.394 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:03.394 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:03.394 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:03.394 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:03.394 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:03.394 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:03.394 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:03.394 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:03.394 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:03.394 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1914889 00:09:03.394 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:03.394 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1914889 00:09:03.394 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:03.394 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1914889 00:09:03.394 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:03.394 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1914889 00:09:03.394 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:09:03.394 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1914889 00:09:03.394 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:09:03.394 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:03.394 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:03.394 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:03.394 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:03.394 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:03.394 element at address: 0x200003adf880 with size: 0.125488 MiB 00:09:03.394 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1914889 00:09:03.394 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:03.394 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:03.394 element at address: 0x200027e69100 with size: 0.023743 MiB 00:09:03.394 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:03.394 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:09:03.394 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1914889 00:09:03.394 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:09:03.394 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:03.394 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:09:03.394 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1914889 00:09:03.394 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:09:03.394 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1914889 00:09:03.394 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:09:03.394 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:03.394 08:43:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:03.394 08:43:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1914889 00:09:03.394 08:43:20 -- common/autotest_common.sh@936 -- # '[' -z 1914889 ']' 00:09:03.394 08:43:20 -- common/autotest_common.sh@940 -- # kill -0 1914889 00:09:03.394 08:43:20 -- common/autotest_common.sh@941 -- # uname 00:09:03.394 08:43:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:03.394 08:43:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1914889 00:09:03.394 08:43:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:03.394 08:43:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:03.394 08:43:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1914889' 00:09:03.394 killing process with pid 1914889 00:09:03.394 08:43:20 -- common/autotest_common.sh@955 -- # kill 1914889 00:09:03.394 08:43:20 -- common/autotest_common.sh@960 -- # wait 1914889 00:09:03.653 00:09:03.653 real 0m1.407s 00:09:03.653 user 0m1.448s 00:09:03.653 sys 0m0.416s 00:09:03.653 08:43:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:03.653 08:43:20 -- common/autotest_common.sh@10 -- # set +x 00:09:03.653 ************************************ 00:09:03.653 END TEST dpdk_mem_utility 00:09:03.653 ************************************ 00:09:03.653 08:43:20 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:03.653 08:43:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:03.653 08:43:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.653 08:43:20 -- common/autotest_common.sh@10 -- # set +x 
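The dpdk_mem_utility pass above works in two stages: the env_dpdk_get_mem_stats RPC makes the running target dump its DPDK allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py post-processes that dump -- first as the heap/mempool/memzone summary, then with -m 0 as the per-element listing of heap 0. Replayed by hand (rpc_cmd in the trace is a thin wrapper over rpc.py):

# Dump allocator state; the RPC replies with the dump file name.
scripts/rpc.py env_dpdk_get_mem_stats
# -> { "filename": "/tmp/spdk_mem_dump.txt" }

# Summarize heaps, mempools and memzones from the dump.
scripts/dpdk_mem_info.py

# Per-element busy/free breakdown for heap 0, as printed above.
scripts/dpdk_mem_info.py -m 0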
00:09:03.911 ************************************ 00:09:03.911 START TEST event 00:09:03.911 ************************************ 00:09:03.911 08:43:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:03.911 * Looking for test storage... 00:09:03.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:03.911 08:43:21 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:03.911 08:43:21 -- bdev/nbd_common.sh@6 -- # set -e 00:09:03.911 08:43:21 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:03.911 08:43:21 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:03.911 08:43:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.911 08:43:21 -- common/autotest_common.sh@10 -- # set +x 00:09:04.170 ************************************ 00:09:04.170 START TEST event_perf 00:09:04.170 ************************************ 00:09:04.170 08:43:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:04.170 Running I/O for 1 seconds...[2024-04-26 08:43:21.255136] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:04.170 [2024-04-26 08:43:21.255215] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1915229 ] 00:09:04.170 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.170 [2024-04-26 08:43:21.325503] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.170 [2024-04-26 08:43:21.394044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.170 [2024-04-26 08:43:21.394142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.170 [2024-04-26 08:43:21.394230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.170 [2024-04-26 08:43:21.394232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.545 Running I/O for 1 seconds... 00:09:05.545 lcore 0: 212541 00:09:05.545 lcore 1: 212540 00:09:05.546 lcore 2: 212540 00:09:05.546 lcore 3: 212540 00:09:05.546 done. 
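event_perf above spins one reactor per core in the 0xF mask and counts event round-trips for the requested duration; each of the four lcores retired roughly 212 k events in the one-second window. event.sh drives three such microbenchmarks, all visible in this log, with these invocations (paths relative to the spdk tree):

# Per-lcore event throughput across 4 reactors for 1 s.
test/event/event_perf/event_perf -m 0xF -t 1

# Functional timer test: oneshot plus 100/250/500 ms periodic ticks.
test/event/reactor/reactor -t 1

# Single-reactor event rate (prints 'Performance: N events per second').
test/event/reactor_perf/reactor_perf -t 1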
00:09:05.546 00:09:05.546 real 0m1.243s 00:09:05.546 user 0m4.151s 00:09:05.546 sys 0m0.088s 00:09:05.546 08:43:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:05.546 08:43:22 -- common/autotest_common.sh@10 -- # set +x 00:09:05.546 ************************************ 00:09:05.546 END TEST event_perf 00:09:05.546 ************************************ 00:09:05.546 08:43:22 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:05.546 08:43:22 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:05.546 08:43:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.546 08:43:22 -- common/autotest_common.sh@10 -- # set +x 00:09:05.546 ************************************ 00:09:05.546 START TEST event_reactor 00:09:05.546 ************************************ 00:09:05.546 08:43:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:05.546 [2024-04-26 08:43:22.669079] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:05.546 [2024-04-26 08:43:22.669156] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1915519 ] 00:09:05.546 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.546 [2024-04-26 08:43:22.741241] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.803 [2024-04-26 08:43:22.812616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.738 test_start 00:09:06.738 oneshot 00:09:06.738 tick 100 00:09:06.738 tick 100 00:09:06.738 tick 250 00:09:06.738 tick 100 00:09:06.738 tick 100 00:09:06.738 tick 250 00:09:06.738 tick 100 00:09:06.738 tick 500 00:09:06.738 tick 100 00:09:06.738 tick 100 00:09:06.738 tick 250 00:09:06.738 tick 100 00:09:06.738 tick 100 00:09:06.738 test_end 00:09:06.738 00:09:06.738 real 0m1.242s 00:09:06.738 user 0m1.159s 00:09:06.738 sys 0m0.078s 00:09:06.738 08:43:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:06.738 08:43:23 -- common/autotest_common.sh@10 -- # set +x 00:09:06.738 ************************************ 00:09:06.738 END TEST event_reactor 00:09:06.738 ************************************ 00:09:06.738 08:43:23 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:06.738 08:43:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:06.738 08:43:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:06.738 08:43:23 -- common/autotest_common.sh@10 -- # set +x 00:09:06.997 ************************************ 00:09:06.997 START TEST event_reactor_perf 00:09:06.997 ************************************ 00:09:06.997 08:43:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:06.997 [2024-04-26 08:43:24.087408] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:09:06.997 [2024-04-26 08:43:24.087563] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1915814 ] 00:09:06.997 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.997 [2024-04-26 08:43:24.167398] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.997 [2024-04-26 08:43:24.233762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.373 test_start 00:09:08.374 test_end 00:09:08.374 Performance: 532437 events per second 00:09:08.374 00:09:08.374 real 0m1.243s 00:09:08.374 user 0m1.145s 00:09:08.374 sys 0m0.093s 00:09:08.374 08:43:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:08.374 08:43:25 -- common/autotest_common.sh@10 -- # set +x 00:09:08.374 ************************************ 00:09:08.374 END TEST event_reactor_perf 00:09:08.374 ************************************ 00:09:08.374 08:43:25 -- event/event.sh@49 -- # uname -s 00:09:08.374 08:43:25 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:08.374 08:43:25 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:08.374 08:43:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:08.374 08:43:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:08.374 08:43:25 -- common/autotest_common.sh@10 -- # set +x 00:09:08.374 ************************************ 00:09:08.374 START TEST event_scheduler 00:09:08.374 ************************************ 00:09:08.374 08:43:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:08.374 * Looking for test storage... 00:09:08.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:09:08.374 08:43:25 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:08.374 08:43:25 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1916138 00:09:08.374 08:43:25 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:08.374 08:43:25 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:08.374 08:43:25 -- scheduler/scheduler.sh@37 -- # waitforlisten 1916138 00:09:08.374 08:43:25 -- common/autotest_common.sh@817 -- # '[' -z 1916138 ']' 00:09:08.374 08:43:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.374 08:43:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:08.374 08:43:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.374 08:43:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:08.374 08:43:25 -- common/autotest_common.sh@10 -- # set +x 00:09:08.632 [2024-04-26 08:43:25.658266] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:09:08.632 [2024-04-26 08:43:25.658317] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1916138 ] 00:09:08.632 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.632 [2024-04-26 08:43:25.724952] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.632 [2024-04-26 08:43:25.794244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.632 [2024-04-26 08:43:25.794331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.632 [2024-04-26 08:43:25.794418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.632 [2024-04-26 08:43:25.794420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.565 08:43:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:09.565 08:43:26 -- common/autotest_common.sh@850 -- # return 0 00:09:09.565 08:43:26 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:09.565 08:43:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.565 08:43:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.565 POWER: Env isn't set yet! 00:09:09.565 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:09.565 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:09.565 POWER: Cannot set governor of lcore 0 to userspace 00:09:09.565 POWER: Attempting to initialise PSTAT power management... 00:09:09.565 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:09:09.565 POWER: Initialized successfully for lcore 0 power management 00:09:09.565 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:09:09.565 POWER: Initialized successfully for lcore 1 power management 00:09:09.565 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:09:09.565 POWER: Initialized successfully for lcore 2 power management 00:09:09.565 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:09:09.565 POWER: Initialized successfully for lcore 3 power management 00:09:09.565 08:43:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.565 08:43:26 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:09.565 08:43:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.565 08:43:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.565 [2024-04-26 08:43:26.589938] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
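The POWER lines above are the side effect of selecting the dynamic scheduler: because the app was started with --wait-for-rpc, the test can pick the scheduler before the framework initializes, and the cpufreq governor of every lcore is switched to 'performance' (ACPI cpufreq is tried first, with PSTAT as the fallback on this machine). The RPC ordering, reduced to its essentials (rpc_cmd is the rpc.py wrapper; the default socket path is assumed):

# The scheduler is chosen while the app idles in --wait-for-rpc,
# then initialization is released.
scripts/rpc.py framework_set_scheduler dynamic
scripts/rpc.py framework_start_init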
00:09:09.565 08:43:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.565 08:43:26 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:09.565 08:43:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:09.565 08:43:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:09.565 08:43:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.565 ************************************ 00:09:09.565 START TEST scheduler_create_thread 00:09:09.565 ************************************ 00:09:09.565 08:43:26 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:09:09.565 08:43:26 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:09.565 08:43:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.565 08:43:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.565 2 00:09:09.565 08:43:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.565 08:43:26 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:09.565 08:43:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.565 08:43:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.565 3 00:09:09.565 08:43:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.565 08:43:26 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:09.565 08:43:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.565 08:43:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.565 4 00:09:09.565 08:43:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.565 08:43:26 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:09.565 08:43:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.565 08:43:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.565 5 00:09:09.565 08:43:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.565 08:43:26 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:09.565 08:43:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.565 08:43:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.824 6 00:09:09.824 08:43:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.824 08:43:26 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:09.824 08:43:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.824 08:43:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.824 7 00:09:09.824 08:43:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.824 08:43:26 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:09.824 08:43:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.824 08:43:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.824 8 00:09:09.824 08:43:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.824 08:43:26 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:09.824 08:43:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.824 08:43:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.824 9 00:09:09.824 
08:43:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.824 08:43:26 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:09.824 08:43:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.824 08:43:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.824 10 00:09:09.824 08:43:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.824 08:43:26 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:09.824 08:43:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.824 08:43:26 -- common/autotest_common.sh@10 -- # set +x 00:09:09.824 08:43:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:09.824 08:43:26 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:09.824 08:43:26 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:09.824 08:43:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:09.824 08:43:26 -- common/autotest_common.sh@10 -- # set +x 00:09:10.758 08:43:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:10.758 08:43:27 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:10.758 08:43:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:10.758 08:43:27 -- common/autotest_common.sh@10 -- # set +x 00:09:12.130 08:43:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:12.130 08:43:29 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:12.130 08:43:29 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:12.130 08:43:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:12.130 08:43:29 -- common/autotest_common.sh@10 -- # set +x 00:09:13.065 08:43:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:13.065 00:09:13.065 real 0m3.482s 00:09:13.065 user 0m0.021s 00:09:13.065 sys 0m0.009s 00:09:13.065 08:43:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:13.065 08:43:30 -- common/autotest_common.sh@10 -- # set +x 00:09:13.065 ************************************ 00:09:13.065 END TEST scheduler_create_thread 00:09:13.065 ************************************ 00:09:13.065 08:43:30 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:13.065 08:43:30 -- scheduler/scheduler.sh@46 -- # killprocess 1916138 00:09:13.065 08:43:30 -- common/autotest_common.sh@936 -- # '[' -z 1916138 ']' 00:09:13.065 08:43:30 -- common/autotest_common.sh@940 -- # kill -0 1916138 00:09:13.065 08:43:30 -- common/autotest_common.sh@941 -- # uname 00:09:13.065 08:43:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:13.065 08:43:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1916138 00:09:13.323 08:43:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:09:13.323 08:43:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:09:13.323 08:43:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1916138' 00:09:13.323 killing process with pid 1916138 00:09:13.323 08:43:30 -- common/autotest_common.sh@955 -- # kill 1916138 00:09:13.323 08:43:30 -- common/autotest_common.sh@960 -- # wait 1916138 00:09:13.581 [2024-04-26 08:43:30.715232] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
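scheduler_create_thread, which just completed, exercises a test-only RPC plugin rather than core RPCs: scheduler_thread_create spawns threads pinned by cpumask (-m) with a target activity percentage (-a), scheduler_thread_set_active retunes one at runtime, and scheduler_thread_delete removes it. The calls as issued in the trace (--plugin loads the python module shipped with this test):

# One fully-busy thread pinned to core 0; the test repeats this per core,
# and again with -a 0 for the idle_pinned set.
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
    -n active_pinned -m 0x1 -a 100

# Thread 11 dropped to 50 % activity; thread 12 created, then deleted.
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12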
00:09:13.581 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:09:13.581 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:09:13.581 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:09:13.581 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:09:13.581 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:09:13.581 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:09:13.581 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:09:13.581 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:09:13.839 00:09:13.839 real 0m5.419s 00:09:13.839 user 0m8.887s 00:09:13.839 sys 0m0.508s 00:09:13.839 08:43:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:13.839 08:43:30 -- common/autotest_common.sh@10 -- # set +x 00:09:13.839 ************************************ 00:09:13.839 END TEST event_scheduler 00:09:13.839 ************************************ 00:09:13.839 08:43:30 -- event/event.sh@51 -- # modprobe -n nbd 00:09:13.839 08:43:30 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:13.839 08:43:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:13.839 08:43:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.839 08:43:30 -- common/autotest_common.sh@10 -- # set +x 00:09:14.098 ************************************ 00:09:14.098 START TEST app_repeat 00:09:14.098 ************************************ 00:09:14.098 08:43:31 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:09:14.098 08:43:31 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.098 08:43:31 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:14.098 08:43:31 -- event/event.sh@13 -- # local nbd_list 00:09:14.098 08:43:31 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:14.098 08:43:31 -- event/event.sh@14 -- # local bdev_list 00:09:14.098 08:43:31 -- event/event.sh@15 -- # local repeat_times=4 00:09:14.098 08:43:31 -- event/event.sh@17 -- # modprobe nbd 00:09:14.098 08:43:31 -- event/event.sh@19 -- # repeat_pid=1917192 00:09:14.098 08:43:31 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:14.098 08:43:31 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:14.098 08:43:31 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1917192' 00:09:14.098 Process app_repeat pid: 1917192 00:09:14.098 08:43:31 -- event/event.sh@23 -- # for i in {0..2} 00:09:14.098 08:43:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:14.098 spdk_app_start Round 0 00:09:14.098 08:43:31 -- event/event.sh@25 -- # waitforlisten 1917192 /var/tmp/spdk-nbd.sock 00:09:14.098 08:43:31 -- common/autotest_common.sh@817 -- # '[' -z 1917192 ']' 00:09:14.098 08:43:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:14.098 08:43:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:14.098 08:43:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:14.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:14.098 08:43:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:14.098 08:43:31 -- common/autotest_common.sh@10 -- # set +x 00:09:14.098 [2024-04-26 08:43:31.177628] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:14.098 [2024-04-26 08:43:31.177700] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917192 ] 00:09:14.098 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.098 [2024-04-26 08:43:31.250227] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:14.098 [2024-04-26 08:43:31.320421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.098 [2024-04-26 08:43:31.320424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.033 08:43:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:15.033 08:43:31 -- common/autotest_common.sh@850 -- # return 0 00:09:15.033 08:43:31 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:15.033 Malloc0 00:09:15.033 08:43:32 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:15.292 Malloc1 00:09:15.292 08:43:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:15.292 08:43:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.292 08:43:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:15.292 08:43:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:15.292 08:43:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.292 08:43:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:15.292 08:43:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:15.292 08:43:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.292 08:43:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:15.292 08:43:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:15.292 08:43:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.292 08:43:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:15.292 08:43:32 -- bdev/nbd_common.sh@12 -- # local i 00:09:15.292 08:43:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:15.292 08:43:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:15.292 08:43:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:15.292 /dev/nbd0 00:09:15.550 08:43:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:15.550 08:43:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:15.550 08:43:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:09:15.550 08:43:32 -- common/autotest_common.sh@855 -- # local i 00:09:15.550 08:43:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:15.550 08:43:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:15.550 08:43:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:09:15.550 08:43:32 -- 
common/autotest_common.sh@859 -- # break 00:09:15.550 08:43:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:15.550 08:43:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:15.550 08:43:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:15.550 1+0 records in 00:09:15.550 1+0 records out 00:09:15.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026301 s, 15.6 MB/s 00:09:15.550 08:43:32 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:15.550 08:43:32 -- common/autotest_common.sh@872 -- # size=4096 00:09:15.550 08:43:32 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:15.550 08:43:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:15.550 08:43:32 -- common/autotest_common.sh@875 -- # return 0 00:09:15.550 08:43:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.550 08:43:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:15.550 08:43:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:15.550 /dev/nbd1 00:09:15.550 08:43:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:15.550 08:43:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:15.550 08:43:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:09:15.550 08:43:32 -- common/autotest_common.sh@855 -- # local i 00:09:15.550 08:43:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:15.550 08:43:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:15.550 08:43:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:09:15.550 08:43:32 -- common/autotest_common.sh@859 -- # break 00:09:15.550 08:43:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:15.550 08:43:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:15.550 08:43:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:15.550 1+0 records in 00:09:15.550 1+0 records out 00:09:15.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255183 s, 16.1 MB/s 00:09:15.550 08:43:32 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:15.550 08:43:32 -- common/autotest_common.sh@872 -- # size=4096 00:09:15.550 08:43:32 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:15.550 08:43:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:15.550 08:43:32 -- common/autotest_common.sh@875 -- # return 0 00:09:15.550 08:43:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.550 08:43:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:15.550 08:43:32 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:15.550 08:43:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.550 08:43:32 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:15.807 08:43:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:15.807 { 00:09:15.807 "nbd_device": "/dev/nbd0", 00:09:15.807 "bdev_name": "Malloc0" 00:09:15.807 }, 00:09:15.807 { 00:09:15.807 "nbd_device": "/dev/nbd1", 
00:09:15.807 "bdev_name": "Malloc1" 00:09:15.807 } 00:09:15.807 ]' 00:09:15.807 08:43:32 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:15.807 { 00:09:15.807 "nbd_device": "/dev/nbd0", 00:09:15.807 "bdev_name": "Malloc0" 00:09:15.807 }, 00:09:15.807 { 00:09:15.807 "nbd_device": "/dev/nbd1", 00:09:15.807 "bdev_name": "Malloc1" 00:09:15.807 } 00:09:15.807 ]' 00:09:15.807 08:43:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:15.807 08:43:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:15.807 /dev/nbd1' 00:09:15.807 08:43:32 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:15.807 /dev/nbd1' 00:09:15.807 08:43:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:15.808 08:43:33 -- bdev/nbd_common.sh@65 -- # count=2 00:09:15.808 08:43:33 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:15.808 08:43:33 -- bdev/nbd_common.sh@95 -- # count=2 00:09:15.808 08:43:33 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:15.808 08:43:33 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:15.808 08:43:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:15.808 08:43:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:15.808 08:43:33 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:15.808 08:43:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:15.808 08:43:33 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:15.808 08:43:33 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:15.808 256+0 records in 00:09:15.808 256+0 records out 00:09:15.808 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010527 s, 99.6 MB/s 00:09:15.808 08:43:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.808 08:43:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:15.808 256+0 records in 00:09:15.808 256+0 records out 00:09:15.808 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019684 s, 53.3 MB/s 00:09:15.808 08:43:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.808 08:43:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:16.066 256+0 records in 00:09:16.066 256+0 records out 00:09:16.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188831 s, 55.5 MB/s 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@51 -- # local i 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@41 -- # break 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.066 08:43:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:16.324 08:43:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:16.324 08:43:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:16.324 08:43:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:16.324 08:43:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.324 08:43:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.324 08:43:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:16.324 08:43:33 -- bdev/nbd_common.sh@41 -- # break 00:09:16.324 08:43:33 -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.324 08:43:33 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:16.324 08:43:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.324 08:43:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:16.582 08:43:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:16.582 08:43:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:16.582 08:43:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:16.582 08:43:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:16.582 08:43:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:16.582 08:43:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:16.582 08:43:33 -- bdev/nbd_common.sh@65 -- # true 00:09:16.582 08:43:33 -- bdev/nbd_common.sh@65 -- # count=0 00:09:16.582 08:43:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:16.582 08:43:33 -- bdev/nbd_common.sh@104 -- # count=0 00:09:16.582 08:43:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:16.582 08:43:33 -- bdev/nbd_common.sh@109 -- # return 0 00:09:16.582 08:43:33 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:16.840 08:43:33 -- event/event.sh@35 -- # 
sleep 3 00:09:17.098 [2024-04-26 08:43:34.087811] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:17.098 [2024-04-26 08:43:34.157971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.098 [2024-04-26 08:43:34.157974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.098 [2024-04-26 08:43:34.199053] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:17.098 [2024-04-26 08:43:34.199095] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:20.379 08:43:36 -- event/event.sh@23 -- # for i in {0..2} 00:09:20.379 08:43:36 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:20.379 spdk_app_start Round 1 00:09:20.379 08:43:36 -- event/event.sh@25 -- # waitforlisten 1917192 /var/tmp/spdk-nbd.sock 00:09:20.379 08:43:36 -- common/autotest_common.sh@817 -- # '[' -z 1917192 ']' 00:09:20.379 08:43:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:20.379 08:43:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:20.379 08:43:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:20.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:20.379 08:43:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:20.379 08:43:36 -- common/autotest_common.sh@10 -- # set +x 00:09:20.379 08:43:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:20.379 08:43:37 -- common/autotest_common.sh@850 -- # return 0 00:09:20.379 08:43:37 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:20.379 Malloc0 00:09:20.379 08:43:37 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:20.379 Malloc1 00:09:20.379 08:43:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@12 -- # local i 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:20.379 /dev/nbd0 00:09:20.379 08:43:37 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:20.379 08:43:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:20.379 08:43:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:09:20.379 08:43:37 -- common/autotest_common.sh@855 -- # local i 00:09:20.379 08:43:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:20.379 08:43:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:20.379 08:43:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:09:20.380 08:43:37 -- common/autotest_common.sh@859 -- # break 00:09:20.380 08:43:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:20.380 08:43:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:20.380 08:43:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:20.380 1+0 records in 00:09:20.380 1+0 records out 00:09:20.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021811 s, 18.8 MB/s 00:09:20.380 08:43:37 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:20.380 08:43:37 -- common/autotest_common.sh@872 -- # size=4096 00:09:20.380 08:43:37 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:20.640 08:43:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:20.640 08:43:37 -- common/autotest_common.sh@875 -- # return 0 00:09:20.640 08:43:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:20.640 08:43:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:20.640 08:43:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:20.640 /dev/nbd1 00:09:20.640 08:43:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:20.640 08:43:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:20.640 08:43:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:09:20.640 08:43:37 -- common/autotest_common.sh@855 -- # local i 00:09:20.640 08:43:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:20.640 08:43:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:20.640 08:43:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:09:20.640 08:43:37 -- common/autotest_common.sh@859 -- # break 00:09:20.640 08:43:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:20.640 08:43:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:20.640 08:43:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:20.640 1+0 records in 00:09:20.640 1+0 records out 00:09:20.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000139314 s, 29.4 MB/s 00:09:20.640 08:43:37 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:20.640 08:43:37 -- common/autotest_common.sh@872 -- # size=4096 00:09:20.640 08:43:37 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:20.640 08:43:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:20.640 08:43:37 -- common/autotest_common.sh@875 -- # return 0 00:09:20.640 08:43:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:20.640 08:43:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:20.640 08:43:37 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:20.640 08:43:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.640 08:43:37 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:20.910 08:43:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:20.910 { 00:09:20.910 "nbd_device": "/dev/nbd0", 00:09:20.910 "bdev_name": "Malloc0" 00:09:20.910 }, 00:09:20.910 { 00:09:20.910 "nbd_device": "/dev/nbd1", 00:09:20.910 "bdev_name": "Malloc1" 00:09:20.910 } 00:09:20.910 ]' 00:09:20.910 08:43:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:20.910 { 00:09:20.910 "nbd_device": "/dev/nbd0", 00:09:20.910 "bdev_name": "Malloc0" 00:09:20.910 }, 00:09:20.910 { 00:09:20.910 "nbd_device": "/dev/nbd1", 00:09:20.910 "bdev_name": "Malloc1" 00:09:20.910 } 00:09:20.910 ]' 00:09:20.910 08:43:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:20.910 /dev/nbd1' 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:20.910 /dev/nbd1' 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@65 -- # count=2 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@95 -- # count=2 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:20.910 256+0 records in 00:09:20.910 256+0 records out 00:09:20.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112996 s, 92.8 MB/s 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:20.910 256+0 records in 00:09:20.910 256+0 records out 00:09:20.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164929 s, 63.6 MB/s 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:20.910 256+0 records in 00:09:20.910 256+0 records out 00:09:20.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210705 s, 49.8 MB/s 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@51 -- # local i 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:20.910 08:43:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:21.183 08:43:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:21.183 08:43:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:21.183 08:43:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:21.183 08:43:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.183 08:43:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.183 08:43:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:21.183 08:43:38 -- bdev/nbd_common.sh@41 -- # break 00:09:21.183 08:43:38 -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.183 08:43:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.183 08:43:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:21.444 08:43:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:21.444 08:43:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:21.444 08:43:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:21.444 08:43:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:21.444 08:43:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:21.444 08:43:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:21.444 08:43:38 -- bdev/nbd_common.sh@41 -- # break 00:09:21.444 08:43:38 -- bdev/nbd_common.sh@45 -- # return 0 00:09:21.444 08:43:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:21.444 08:43:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.444 08:43:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:21.702 08:43:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:21.702 08:43:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:21.702 08:43:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:21.702 08:43:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:21.702 08:43:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:21.702 08:43:38 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:09:21.702 08:43:38 -- bdev/nbd_common.sh@65 -- # true 00:09:21.702 08:43:38 -- bdev/nbd_common.sh@65 -- # count=0 00:09:21.702 08:43:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:21.702 08:43:38 -- bdev/nbd_common.sh@104 -- # count=0 00:09:21.702 08:43:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:21.702 08:43:38 -- bdev/nbd_common.sh@109 -- # return 0 00:09:21.702 08:43:38 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:21.960 08:43:38 -- event/event.sh@35 -- # sleep 3 00:09:21.960 [2024-04-26 08:43:39.157150] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:22.218 [2024-04-26 08:43:39.219144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.218 [2024-04-26 08:43:39.219147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.218 [2024-04-26 08:43:39.260861] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:22.218 [2024-04-26 08:43:39.260904] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:24.743 08:43:41 -- event/event.sh@23 -- # for i in {0..2} 00:09:24.743 08:43:41 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:24.743 spdk_app_start Round 2 00:09:24.743 08:43:41 -- event/event.sh@25 -- # waitforlisten 1917192 /var/tmp/spdk-nbd.sock 00:09:24.743 08:43:41 -- common/autotest_common.sh@817 -- # '[' -z 1917192 ']' 00:09:24.743 08:43:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:24.743 08:43:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:24.743 08:43:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:24.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
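Rounds 0 and 1 are complete at this point and the loop re-enters for Round 2. Condensed from the commands traced above, each app_repeat round reduces to the sketch below; the RPC method names, sizes, and socket path are taken verbatim from the trace, while the loop framing is a simplification of event/event.sh plus the app_repeat binary, not the script itself:

    rpc='scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    for round in 0 1 2; do
        echo "spdk_app_start Round $round"
        $rpc bdev_malloc_create 64 4096          # 64 MB bdev, 4 KiB blocks -> Malloc0
        $rpc bdev_malloc_create 64 4096          # -> Malloc1
        $rpc nbd_start_disk Malloc0 /dev/nbd0    # export the bdevs as kernel block devices
        $rpc nbd_start_disk Malloc1 /dev/nbd1
        # ... write and verify both devices (see the dd/cmp sketch further down) ...
        $rpc nbd_stop_disk /dev/nbd0
        $rpc nbd_stop_disk /dev/nbd1
        $rpc spdk_kill_instance SIGTERM          # ask the app to shut down
        sleep 3                                  # give it time to restart for the next round
    done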
00:09:24.743 08:43:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:24.743 08:43:41 -- common/autotest_common.sh@10 -- # set +x 00:09:25.001 08:43:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:25.001 08:43:42 -- common/autotest_common.sh@850 -- # return 0 00:09:25.001 08:43:42 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:25.259 Malloc0 00:09:25.259 08:43:42 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:25.259 Malloc1 00:09:25.259 08:43:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:25.259 08:43:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.259 08:43:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:25.259 08:43:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:25.259 08:43:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:25.259 08:43:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:25.259 08:43:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:25.259 08:43:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.259 08:43:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:25.259 08:43:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:25.259 08:43:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:25.259 08:43:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:25.259 08:43:42 -- bdev/nbd_common.sh@12 -- # local i 00:09:25.259 08:43:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:25.259 08:43:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:25.259 08:43:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:25.518 /dev/nbd0 00:09:25.518 08:43:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:25.518 08:43:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:25.518 08:43:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:09:25.518 08:43:42 -- common/autotest_common.sh@855 -- # local i 00:09:25.518 08:43:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:25.518 08:43:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:25.518 08:43:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:09:25.518 08:43:42 -- common/autotest_common.sh@859 -- # break 00:09:25.518 08:43:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:25.518 08:43:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:25.518 08:43:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:25.518 1+0 records in 00:09:25.518 1+0 records out 00:09:25.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259562 s, 15.8 MB/s 00:09:25.518 08:43:42 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:25.518 08:43:42 -- common/autotest_common.sh@872 -- # size=4096 00:09:25.518 08:43:42 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:25.518 08:43:42 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:09:25.518 08:43:42 -- common/autotest_common.sh@875 -- # return 0 00:09:25.518 08:43:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:25.518 08:43:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:25.518 08:43:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:25.775 /dev/nbd1 00:09:25.775 08:43:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:25.775 08:43:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:25.775 08:43:42 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:09:25.775 08:43:42 -- common/autotest_common.sh@855 -- # local i 00:09:25.775 08:43:42 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:25.775 08:43:42 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:25.775 08:43:42 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:09:25.775 08:43:42 -- common/autotest_common.sh@859 -- # break 00:09:25.775 08:43:42 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:25.775 08:43:42 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:25.775 08:43:42 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:25.775 1+0 records in 00:09:25.775 1+0 records out 00:09:25.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000161064 s, 25.4 MB/s 00:09:25.775 08:43:42 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:25.775 08:43:42 -- common/autotest_common.sh@872 -- # size=4096 00:09:25.775 08:43:42 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:25.775 08:43:42 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:25.775 08:43:42 -- common/autotest_common.sh@875 -- # return 0 00:09:25.775 08:43:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:25.775 08:43:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:25.775 08:43:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:25.775 08:43:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.775 08:43:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:26.032 08:43:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:26.032 { 00:09:26.032 "nbd_device": "/dev/nbd0", 00:09:26.032 "bdev_name": "Malloc0" 00:09:26.032 }, 00:09:26.032 { 00:09:26.032 "nbd_device": "/dev/nbd1", 00:09:26.032 "bdev_name": "Malloc1" 00:09:26.032 } 00:09:26.032 ]' 00:09:26.032 08:43:43 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:26.032 { 00:09:26.032 "nbd_device": "/dev/nbd0", 00:09:26.032 "bdev_name": "Malloc0" 00:09:26.032 }, 00:09:26.032 { 00:09:26.032 "nbd_device": "/dev/nbd1", 00:09:26.032 "bdev_name": "Malloc1" 00:09:26.033 } 00:09:26.033 ]' 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:26.033 /dev/nbd1' 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:26.033 /dev/nbd1' 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@65 -- # count=2 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@95 -- # count=2 00:09:26.033 08:43:43 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:26.033 256+0 records in 00:09:26.033 256+0 records out 00:09:26.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114073 s, 91.9 MB/s 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:26.033 256+0 records in 00:09:26.033 256+0 records out 00:09:26.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195622 s, 53.6 MB/s 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:26.033 256+0 records in 00:09:26.033 256+0 records out 00:09:26.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139023 s, 75.4 MB/s 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@51 -- # local i 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:26.033 08:43:43 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:26.290 08:43:43 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:26.290 08:43:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:26.290 08:43:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:26.290 08:43:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.290 08:43:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.290 08:43:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:26.290 08:43:43 -- bdev/nbd_common.sh@41 -- # break 00:09:26.290 08:43:43 -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.290 08:43:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:26.290 08:43:43 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@41 -- # break 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:26.547 08:43:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:26.809 08:43:43 -- bdev/nbd_common.sh@65 -- # true 00:09:26.809 08:43:43 -- bdev/nbd_common.sh@65 -- # count=0 00:09:26.809 08:43:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:26.809 08:43:43 -- bdev/nbd_common.sh@104 -- # count=0 00:09:26.809 08:43:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:26.809 08:43:43 -- bdev/nbd_common.sh@109 -- # return 0 00:09:26.809 08:43:43 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:26.809 08:43:43 -- event/event.sh@35 -- # sleep 3 00:09:27.067 [2024-04-26 08:43:44.189227] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.067 [2024-04-26 08:43:44.249796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.067 [2024-04-26 08:43:44.249798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.067 [2024-04-26 08:43:44.290560] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:27.067 [2024-04-26 08:43:44.290605] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
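Each round's data check is the same dd/cmp pattern visible in the trace: fill a scratch file with 1 MiB of random data, write it through both NBD devices with O_DIRECT, then compare the first 1 MiB byte-for-byte. A condensed equivalent, with $SPDK standing in for the workspace checkout path:

    tmp=$SPDK/test/event/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256            # 256 x 4 KiB = 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct   # write through each NBD device
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp $nbd                              # exits non-zero on the first mismatch
    done
    rm $tmp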
00:09:30.339 08:43:46 -- event/event.sh@38 -- # waitforlisten 1917192 /var/tmp/spdk-nbd.sock 00:09:30.339 08:43:46 -- common/autotest_common.sh@817 -- # '[' -z 1917192 ']' 00:09:30.339 08:43:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:30.339 08:43:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:30.339 08:43:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:30.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:30.339 08:43:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:30.339 08:43:46 -- common/autotest_common.sh@10 -- # set +x 00:09:30.339 08:43:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:30.339 08:43:47 -- common/autotest_common.sh@850 -- # return 0 00:09:30.339 08:43:47 -- event/event.sh@39 -- # killprocess 1917192 00:09:30.339 08:43:47 -- common/autotest_common.sh@936 -- # '[' -z 1917192 ']' 00:09:30.339 08:43:47 -- common/autotest_common.sh@940 -- # kill -0 1917192 00:09:30.339 08:43:47 -- common/autotest_common.sh@941 -- # uname 00:09:30.339 08:43:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:30.339 08:43:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1917192 00:09:30.339 08:43:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:30.339 08:43:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:30.339 08:43:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1917192' 00:09:30.339 killing process with pid 1917192 00:09:30.339 08:43:47 -- common/autotest_common.sh@955 -- # kill 1917192 00:09:30.339 08:43:47 -- common/autotest_common.sh@960 -- # wait 1917192 00:09:30.339 spdk_app_start is called in Round 0. 00:09:30.339 Shutdown signal received, stop current app iteration 00:09:30.339 Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 reinitialization... 00:09:30.339 spdk_app_start is called in Round 1. 00:09:30.339 Shutdown signal received, stop current app iteration 00:09:30.339 Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 reinitialization... 00:09:30.339 spdk_app_start is called in Round 2. 00:09:30.339 Shutdown signal received, stop current app iteration 00:09:30.339 Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 reinitialization... 00:09:30.339 spdk_app_start is called in Round 3. 
00:09:30.339 Shutdown signal received, stop current app iteration 00:09:30.339 08:43:47 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:30.339 08:43:47 -- event/event.sh@42 -- # return 0 00:09:30.339 00:09:30.339 real 0m16.247s 00:09:30.339 user 0m34.453s 00:09:30.339 sys 0m2.980s 00:09:30.339 08:43:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:30.339 08:43:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.339 ************************************ 00:09:30.339 END TEST app_repeat 00:09:30.339 ************************************ 00:09:30.339 08:43:47 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:30.339 08:43:47 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:30.339 08:43:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:30.339 08:43:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:30.339 08:43:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.339 ************************************ 00:09:30.339 START TEST cpu_locks 00:09:30.339 ************************************ 00:09:30.339 08:43:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:30.598 * Looking for test storage... 00:09:30.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:30.598 08:43:47 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:30.598 08:43:47 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:30.598 08:43:47 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:30.598 08:43:47 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:30.598 08:43:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:30.598 08:43:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:30.598 08:43:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.598 ************************************ 00:09:30.598 START TEST default_locks 00:09:30.598 ************************************ 00:09:30.598 08:43:47 -- common/autotest_common.sh@1111 -- # default_locks 00:09:30.598 08:43:47 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1920251 00:09:30.598 08:43:47 -- event/cpu_locks.sh@47 -- # waitforlisten 1920251 00:09:30.598 08:43:47 -- common/autotest_common.sh@817 -- # '[' -z 1920251 ']' 00:09:30.598 08:43:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.598 08:43:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:30.598 08:43:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.598 08:43:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:30.598 08:43:47 -- common/autotest_common.sh@10 -- # set +x 00:09:30.598 08:43:47 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:30.598 [2024-04-26 08:43:47.828024] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:09:30.598 [2024-04-26 08:43:47.828069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920251 ] 00:09:30.857 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.857 [2024-04-26 08:43:47.896253] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.857 [2024-04-26 08:43:47.967613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.423 08:43:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:31.423 08:43:48 -- common/autotest_common.sh@850 -- # return 0 00:09:31.423 08:43:48 -- event/cpu_locks.sh@49 -- # locks_exist 1920251 00:09:31.423 08:43:48 -- event/cpu_locks.sh@22 -- # lslocks -p 1920251 00:09:31.423 08:43:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:31.683 lslocks: write error 00:09:31.683 08:43:48 -- event/cpu_locks.sh@50 -- # killprocess 1920251 00:09:31.683 08:43:48 -- common/autotest_common.sh@936 -- # '[' -z 1920251 ']' 00:09:31.683 08:43:48 -- common/autotest_common.sh@940 -- # kill -0 1920251 00:09:31.683 08:43:48 -- common/autotest_common.sh@941 -- # uname 00:09:31.683 08:43:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:31.683 08:43:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1920251 00:09:31.683 08:43:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:31.683 08:43:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:31.683 08:43:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1920251' 00:09:31.683 killing process with pid 1920251 00:09:31.683 08:43:48 -- common/autotest_common.sh@955 -- # kill 1920251 00:09:31.683 08:43:48 -- common/autotest_common.sh@960 -- # wait 1920251 00:09:32.250 08:43:49 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1920251 00:09:32.250 08:43:49 -- common/autotest_common.sh@638 -- # local es=0 00:09:32.250 08:43:49 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1920251 00:09:32.250 08:43:49 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:09:32.250 08:43:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:32.250 08:43:49 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:09:32.250 08:43:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:32.250 08:43:49 -- common/autotest_common.sh@641 -- # waitforlisten 1920251 00:09:32.250 08:43:49 -- common/autotest_common.sh@817 -- # '[' -z 1920251 ']' 00:09:32.250 08:43:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.250 08:43:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:32.250 08:43:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
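With the target up and pinned to core 0, locks_exist asserts that the process holds the per-core lock file. The check reduces to a one-line pipeline (pid substituted here for the literal 1920251 in the trace); grep -q exits on the first match and closes the pipe early, which is the likely source of the harmless 'lslocks: write error' lines that follow:

    pid=1920251                                 # spdk_tgt pid reported by waitforlisten
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # returns 0 while the core-0 lock file is held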
00:09:32.250 08:43:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:32.250 08:43:49 -- common/autotest_common.sh@10 -- # set +x 00:09:32.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1920251) - No such process 00:09:32.250 ERROR: process (pid: 1920251) is no longer running 00:09:32.250 08:43:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:32.250 08:43:49 -- common/autotest_common.sh@850 -- # return 1 00:09:32.250 08:43:49 -- common/autotest_common.sh@641 -- # es=1 00:09:32.250 08:43:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:32.250 08:43:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:32.250 08:43:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:32.250 08:43:49 -- event/cpu_locks.sh@54 -- # no_locks 00:09:32.250 08:43:49 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:32.250 08:43:49 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:32.250 08:43:49 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:32.250 00:09:32.250 real 0m1.489s 00:09:32.250 user 0m1.531s 00:09:32.250 sys 0m0.500s 00:09:32.250 08:43:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:32.250 08:43:49 -- common/autotest_common.sh@10 -- # set +x 00:09:32.250 ************************************ 00:09:32.250 END TEST default_locks 00:09:32.250 ************************************ 00:09:32.250 08:43:49 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:32.250 08:43:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:32.250 08:43:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.250 08:43:49 -- common/autotest_common.sh@10 -- # set +x 00:09:32.250 ************************************ 00:09:32.250 START TEST default_locks_via_rpc 00:09:32.250 ************************************ 00:09:32.250 08:43:49 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:09:32.250 08:43:49 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1920642 00:09:32.250 08:43:49 -- event/cpu_locks.sh@63 -- # waitforlisten 1920642 00:09:32.250 08:43:49 -- common/autotest_common.sh@817 -- # '[' -z 1920642 ']' 00:09:32.250 08:43:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.250 08:43:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:32.250 08:43:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.250 08:43:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:32.250 08:43:49 -- common/autotest_common.sh@10 -- # set +x 00:09:32.250 08:43:49 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:32.509 [2024-04-26 08:43:49.504357] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
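This waitforlisten is expected to fail, since the pid was just killed; the NOT helper from autotest_common.sh inverts the result while still propagating signal-level exit codes. A simplified stand-in for the real helper (the actual one also validates its argument via valid_exec_arg, as the trace shows):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by signal: a real error, do not invert
        (( es != 0 ))                    # succeed only if the wrapped command failed
    }
    NOT waitforlisten 1920251            # pid is gone -> waitforlisten fails -> NOT returns 0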
00:09:32.509 [2024-04-26 08:43:49.504402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920642 ] 00:09:32.509 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.509 [2024-04-26 08:43:49.572822] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.509 [2024-04-26 08:43:49.645425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.077 08:43:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:33.077 08:43:50 -- common/autotest_common.sh@850 -- # return 0 00:09:33.077 08:43:50 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:33.077 08:43:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.077 08:43:50 -- common/autotest_common.sh@10 -- # set +x 00:09:33.077 08:43:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.077 08:43:50 -- event/cpu_locks.sh@67 -- # no_locks 00:09:33.077 08:43:50 -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:33.077 08:43:50 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:33.077 08:43:50 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:33.077 08:43:50 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:33.077 08:43:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:33.077 08:43:50 -- common/autotest_common.sh@10 -- # set +x 00:09:33.077 08:43:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:33.077 08:43:50 -- event/cpu_locks.sh@71 -- # locks_exist 1920642 00:09:33.077 08:43:50 -- event/cpu_locks.sh@22 -- # lslocks -p 1920642 00:09:33.077 08:43:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:33.644 08:43:50 -- event/cpu_locks.sh@73 -- # killprocess 1920642 00:09:33.644 08:43:50 -- common/autotest_common.sh@936 -- # '[' -z 1920642 ']' 00:09:33.644 08:43:50 -- common/autotest_common.sh@940 -- # kill -0 1920642 00:09:33.644 08:43:50 -- common/autotest_common.sh@941 -- # uname 00:09:33.644 08:43:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:33.644 08:43:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1920642 00:09:33.644 08:43:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:33.644 08:43:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:33.644 08:43:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1920642' 00:09:33.644 killing process with pid 1920642 00:09:33.644 08:43:50 -- common/autotest_common.sh@955 -- # kill 1920642 00:09:33.644 08:43:50 -- common/autotest_common.sh@960 -- # wait 1920642 00:09:33.918 00:09:33.918 real 0m1.551s 00:09:33.918 user 0m1.615s 00:09:33.918 sys 0m0.511s 00:09:33.918 08:43:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:33.918 08:43:51 -- common/autotest_common.sh@10 -- # set +x 00:09:33.918 ************************************ 00:09:33.918 END TEST default_locks_via_rpc 00:09:33.918 ************************************ 00:09:33.918 08:43:51 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:33.918 08:43:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:33.918 08:43:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.918 08:43:51 -- common/autotest_common.sh@10 -- # set +x 00:09:34.179 ************************************ 00:09:34.179 START TEST non_locking_app_on_locked_coremask 
00:09:34.179 ************************************ 00:09:34.179 08:43:51 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:09:34.179 08:43:51 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1921040 00:09:34.179 08:43:51 -- event/cpu_locks.sh@81 -- # waitforlisten 1921040 /var/tmp/spdk.sock 00:09:34.179 08:43:51 -- common/autotest_common.sh@817 -- # '[' -z 1921040 ']' 00:09:34.179 08:43:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.179 08:43:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:34.179 08:43:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.179 08:43:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:34.179 08:43:51 -- common/autotest_common.sh@10 -- # set +x 00:09:34.179 08:43:51 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:34.179 [2024-04-26 08:43:51.227171] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:34.179 [2024-04-26 08:43:51.227215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921040 ] 00:09:34.179 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.179 [2024-04-26 08:43:51.295470] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.179 [2024-04-26 08:43:51.367402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.115 08:43:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:35.115 08:43:52 -- common/autotest_common.sh@850 -- # return 0 00:09:35.115 08:43:52 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1921060 00:09:35.115 08:43:52 -- event/cpu_locks.sh@85 -- # waitforlisten 1921060 /var/tmp/spdk2.sock 00:09:35.115 08:43:52 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:35.115 08:43:52 -- common/autotest_common.sh@817 -- # '[' -z 1921060 ']' 00:09:35.115 08:43:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:35.115 08:43:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:35.115 08:43:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:35.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:35.115 08:43:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:35.115 08:43:52 -- common/autotest_common.sh@10 -- # set +x 00:09:35.115 [2024-04-26 08:43:52.032924] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:35.115 [2024-04-26 08:43:52.032972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921060 ] 00:09:35.115 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.115 [2024-04-26 08:43:52.130003] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
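non_locking_app_on_locked_coremask shows the lock is opt-out per process rather than absolute: the first spdk_tgt takes the core-0 lock, and a second instance on the same cpumask still starts because --disable-cpumask-locks skips acquisition entirely ('CPU core locks deactivated' above). The two launches, condensed from the trace with relative paths, explicit backgrounding, and autotest_common.sh assumed sourced for waitforlisten:

    build/bin/spdk_tgt -m 0x1 &                    # acquires spdk_cpu_lock for core 0
    waitforlisten $! /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    waitforlisten $! /var/tmp/spdk2.sock           # comes up despite the held lock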
00:09:35.115 [2024-04-26 08:43:52.130029] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.115 [2024-04-26 08:43:52.272524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.681 08:43:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:35.681 08:43:52 -- common/autotest_common.sh@850 -- # return 0 00:09:35.681 08:43:52 -- event/cpu_locks.sh@87 -- # locks_exist 1921040 00:09:35.681 08:43:52 -- event/cpu_locks.sh@22 -- # lslocks -p 1921040 00:09:35.681 08:43:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:37.059 lslocks: write error 00:09:37.059 08:43:54 -- event/cpu_locks.sh@89 -- # killprocess 1921040 00:09:37.059 08:43:54 -- common/autotest_common.sh@936 -- # '[' -z 1921040 ']' 00:09:37.059 08:43:54 -- common/autotest_common.sh@940 -- # kill -0 1921040 00:09:37.059 08:43:54 -- common/autotest_common.sh@941 -- # uname 00:09:37.059 08:43:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:37.059 08:43:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1921040 00:09:37.059 08:43:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:37.059 08:43:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:37.059 08:43:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1921040' 00:09:37.059 killing process with pid 1921040 00:09:37.059 08:43:54 -- common/autotest_common.sh@955 -- # kill 1921040 00:09:37.059 08:43:54 -- common/autotest_common.sh@960 -- # wait 1921040 00:09:37.627 08:43:54 -- event/cpu_locks.sh@90 -- # killprocess 1921060 00:09:37.627 08:43:54 -- common/autotest_common.sh@936 -- # '[' -z 1921060 ']' 00:09:37.627 08:43:54 -- common/autotest_common.sh@940 -- # kill -0 1921060 00:09:37.627 08:43:54 -- common/autotest_common.sh@941 -- # uname 00:09:37.627 08:43:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:37.627 08:43:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1921060 00:09:37.885 08:43:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:37.885 08:43:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:37.885 08:43:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1921060' 00:09:37.885 killing process with pid 1921060 00:09:37.885 08:43:54 -- common/autotest_common.sh@955 -- # kill 1921060 00:09:37.885 08:43:54 -- common/autotest_common.sh@960 -- # wait 1921060 00:09:38.145 00:09:38.145 real 0m4.032s 00:09:38.145 user 0m4.275s 00:09:38.145 sys 0m1.349s 00:09:38.145 08:43:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:38.145 08:43:55 -- common/autotest_common.sh@10 -- # set +x 00:09:38.145 ************************************ 00:09:38.145 END TEST non_locking_app_on_locked_coremask 00:09:38.145 ************************************ 00:09:38.145 08:43:55 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:38.145 08:43:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:38.145 08:43:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.145 08:43:55 -- common/autotest_common.sh@10 -- # set +x 00:09:38.403 ************************************ 00:09:38.403 START TEST locking_app_on_unlocked_coremask 00:09:38.404 ************************************ 00:09:38.404 08:43:55 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:09:38.404 08:43:55 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1921688 00:09:38.404 08:43:55 -- 
event/cpu_locks.sh@99 -- # waitforlisten 1921688 /var/tmp/spdk.sock 00:09:38.404 08:43:55 -- common/autotest_common.sh@817 -- # '[' -z 1921688 ']' 00:09:38.404 08:43:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.404 08:43:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:38.404 08:43:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.404 08:43:55 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:38.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.404 08:43:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:38.404 08:43:55 -- common/autotest_common.sh@10 -- # set +x 00:09:38.404 [2024-04-26 08:43:55.463211] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:38.404 [2024-04-26 08:43:55.463263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921688 ] 00:09:38.404 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.404 [2024-04-26 08:43:55.533200] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:38.404 [2024-04-26 08:43:55.533224] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.404 [2024-04-26 08:43:55.607049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.340 08:43:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:39.340 08:43:56 -- common/autotest_common.sh@850 -- # return 0 00:09:39.340 08:43:56 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1921898 00:09:39.340 08:43:56 -- event/cpu_locks.sh@103 -- # waitforlisten 1921898 /var/tmp/spdk2.sock 00:09:39.340 08:43:56 -- common/autotest_common.sh@817 -- # '[' -z 1921898 ']' 00:09:39.340 08:43:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:39.340 08:43:56 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:39.340 08:43:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:39.340 08:43:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:39.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:39.340 08:43:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:39.340 08:43:56 -- common/autotest_common.sh@10 -- # set +x 00:09:39.340 [2024-04-26 08:43:56.299334] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:09:39.340 [2024-04-26 08:43:56.299385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921898 ] 00:09:39.340 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.340 [2024-04-26 08:43:56.394004] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.340 [2024-04-26 08:43:56.529320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.907 08:43:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:39.907 08:43:57 -- common/autotest_common.sh@850 -- # return 0 00:09:39.907 08:43:57 -- event/cpu_locks.sh@105 -- # locks_exist 1921898 00:09:39.907 08:43:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:39.907 08:43:57 -- event/cpu_locks.sh@22 -- # lslocks -p 1921898 00:09:40.841 lslocks: write error 00:09:40.841 08:43:57 -- event/cpu_locks.sh@107 -- # killprocess 1921688 00:09:40.841 08:43:57 -- common/autotest_common.sh@936 -- # '[' -z 1921688 ']' 00:09:40.841 08:43:57 -- common/autotest_common.sh@940 -- # kill -0 1921688 00:09:40.841 08:43:57 -- common/autotest_common.sh@941 -- # uname 00:09:40.841 08:43:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:40.841 08:43:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1921688 00:09:40.841 08:43:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:40.841 08:43:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:40.841 08:43:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1921688' 00:09:40.841 killing process with pid 1921688 00:09:40.841 08:43:58 -- common/autotest_common.sh@955 -- # kill 1921688 00:09:40.841 08:43:58 -- common/autotest_common.sh@960 -- # wait 1921688 00:09:41.778 08:43:58 -- event/cpu_locks.sh@108 -- # killprocess 1921898 00:09:41.778 08:43:58 -- common/autotest_common.sh@936 -- # '[' -z 1921898 ']' 00:09:41.778 08:43:58 -- common/autotest_common.sh@940 -- # kill -0 1921898 00:09:41.778 08:43:58 -- common/autotest_common.sh@941 -- # uname 00:09:41.778 08:43:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:41.778 08:43:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1921898 00:09:41.778 08:43:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:41.778 08:43:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:41.778 08:43:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1921898' 00:09:41.778 killing process with pid 1921898 00:09:41.778 08:43:58 -- common/autotest_common.sh@955 -- # kill 1921898 00:09:41.778 08:43:58 -- common/autotest_common.sh@960 -- # wait 1921898 00:09:42.038 00:09:42.038 real 0m3.633s 00:09:42.038 user 0m3.861s 00:09:42.038 sys 0m1.206s 00:09:42.038 08:43:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:42.038 08:43:59 -- common/autotest_common.sh@10 -- # set +x 00:09:42.038 ************************************ 00:09:42.038 END TEST locking_app_on_unlocked_coremask 00:09:42.038 ************************************ 00:09:42.038 08:43:59 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:42.038 08:43:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:42.038 08:43:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:42.038 08:43:59 -- common/autotest_common.sh@10 -- # set +x 00:09:42.038 
************************************ 00:09:42.038 START TEST locking_app_on_locked_coremask 00:09:42.038 ************************************ 00:09:42.038 08:43:59 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:09:42.038 08:43:59 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1922466 00:09:42.038 08:43:59 -- event/cpu_locks.sh@116 -- # waitforlisten 1922466 /var/tmp/spdk.sock 00:09:42.038 08:43:59 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:42.038 08:43:59 -- common/autotest_common.sh@817 -- # '[' -z 1922466 ']' 00:09:42.038 08:43:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.038 08:43:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:42.038 08:43:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.038 08:43:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:42.038 08:43:59 -- common/autotest_common.sh@10 -- # set +x 00:09:42.297 [2024-04-26 08:43:59.324946] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:42.298 [2024-04-26 08:43:59.324994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922466 ] 00:09:42.298 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.298 [2024-04-26 08:43:59.393773] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.298 [2024-04-26 08:43:59.458936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.233 08:44:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:43.233 08:44:00 -- common/autotest_common.sh@850 -- # return 0 00:09:43.233 08:44:00 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1922577 00:09:43.233 08:44:00 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1922577 /var/tmp/spdk2.sock 00:09:43.233 08:44:00 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:43.233 08:44:00 -- common/autotest_common.sh@638 -- # local es=0 00:09:43.233 08:44:00 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1922577 /var/tmp/spdk2.sock 00:09:43.233 08:44:00 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:09:43.233 08:44:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:43.233 08:44:00 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:09:43.233 08:44:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:43.233 08:44:00 -- common/autotest_common.sh@641 -- # waitforlisten 1922577 /var/tmp/spdk2.sock 00:09:43.233 08:44:00 -- common/autotest_common.sh@817 -- # '[' -z 1922577 ']' 00:09:43.233 08:44:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:43.233 08:44:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:43.233 08:44:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:43.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:43.233 08:44:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:43.233 08:44:00 -- common/autotest_common.sh@10 -- # set +x 00:09:43.233 [2024-04-26 08:44:00.172077] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:43.233 [2024-04-26 08:44:00.172130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1922577 ] 00:09:43.233 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.233 [2024-04-26 08:44:00.274849] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1922466 has claimed it. 00:09:43.233 [2024-04-26 08:44:00.274891] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:43.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1922577) - No such process 00:09:43.812 ERROR: process (pid: 1922577) is no longer running 00:09:43.812 08:44:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:43.812 08:44:00 -- common/autotest_common.sh@850 -- # return 1 00:09:43.812 08:44:00 -- common/autotest_common.sh@641 -- # es=1 00:09:43.812 08:44:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:43.812 08:44:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:43.812 08:44:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:43.812 08:44:00 -- event/cpu_locks.sh@122 -- # locks_exist 1922466 00:09:43.812 08:44:00 -- event/cpu_locks.sh@22 -- # lslocks -p 1922466 00:09:43.812 08:44:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:44.100 lslocks: write error 00:09:44.100 08:44:01 -- event/cpu_locks.sh@124 -- # killprocess 1922466 00:09:44.100 08:44:01 -- common/autotest_common.sh@936 -- # '[' -z 1922466 ']' 00:09:44.100 08:44:01 -- common/autotest_common.sh@940 -- # kill -0 1922466 00:09:44.100 08:44:01 -- common/autotest_common.sh@941 -- # uname 00:09:44.100 08:44:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:44.100 08:44:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1922466 00:09:44.359 08:44:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:44.359 08:44:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:44.359 08:44:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1922466' 00:09:44.359 killing process with pid 1922466 00:09:44.359 08:44:01 -- common/autotest_common.sh@955 -- # kill 1922466 00:09:44.359 08:44:01 -- common/autotest_common.sh@960 -- # wait 1922466 00:09:44.617 00:09:44.617 real 0m2.425s 00:09:44.617 user 0m2.620s 00:09:44.617 sys 0m0.749s 00:09:44.617 08:44:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:44.617 08:44:01 -- common/autotest_common.sh@10 -- # set +x 00:09:44.617 ************************************ 00:09:44.617 END TEST locking_app_on_locked_coremask 00:09:44.617 ************************************ 00:09:44.617 08:44:01 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:44.617 08:44:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:44.617 08:44:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:44.617 08:44:01 -- common/autotest_common.sh@10 -- # set +x 00:09:44.876 ************************************ 00:09:44.876 START TEST locking_overlapped_coremask 00:09:44.876 
************************************ 00:09:44.876 08:44:01 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:09:44.876 08:44:01 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1923027 00:09:44.876 08:44:01 -- event/cpu_locks.sh@133 -- # waitforlisten 1923027 /var/tmp/spdk.sock 00:09:44.876 08:44:01 -- common/autotest_common.sh@817 -- # '[' -z 1923027 ']' 00:09:44.876 08:44:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.876 08:44:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:44.876 08:44:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.876 08:44:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:44.876 08:44:01 -- common/autotest_common.sh@10 -- # set +x 00:09:44.876 08:44:01 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:09:44.876 [2024-04-26 08:44:01.956479] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:44.876 [2024-04-26 08:44:01.956525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923027 ] 00:09:44.876 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.876 [2024-04-26 08:44:02.025632] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:44.876 [2024-04-26 08:44:02.098460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.876 [2024-04-26 08:44:02.098539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.876 [2024-04-26 08:44:02.098542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.812 08:44:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:45.812 08:44:02 -- common/autotest_common.sh@850 -- # return 0 00:09:45.812 08:44:02 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1923052 00:09:45.812 08:44:02 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1923052 /var/tmp/spdk2.sock 00:09:45.812 08:44:02 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:45.812 08:44:02 -- common/autotest_common.sh@638 -- # local es=0 00:09:45.813 08:44:02 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1923052 /var/tmp/spdk2.sock 00:09:45.813 08:44:02 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:09:45.813 08:44:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:45.813 08:44:02 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:09:45.813 08:44:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:45.813 08:44:02 -- common/autotest_common.sh@641 -- # waitforlisten 1923052 /var/tmp/spdk2.sock 00:09:45.813 08:44:02 -- common/autotest_common.sh@817 -- # '[' -z 1923052 ']' 00:09:45.813 08:44:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:45.813 08:44:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:45.813 08:44:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:09:45.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:45.813 08:44:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:45.813 08:44:02 -- common/autotest_common.sh@10 -- # set +x 00:09:45.813 [2024-04-26 08:44:02.794741] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:45.813 [2024-04-26 08:44:02.794791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923052 ] 00:09:45.813 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.813 [2024-04-26 08:44:02.894199] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1923027 has claimed it. 00:09:45.813 [2024-04-26 08:44:02.894236] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:46.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1923052) - No such process 00:09:46.381 ERROR: process (pid: 1923052) is no longer running 00:09:46.381 08:44:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:46.381 08:44:03 -- common/autotest_common.sh@850 -- # return 1 00:09:46.381 08:44:03 -- common/autotest_common.sh@641 -- # es=1 00:09:46.381 08:44:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:46.381 08:44:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:46.381 08:44:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:46.381 08:44:03 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:46.381 08:44:03 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:46.381 08:44:03 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:46.381 08:44:03 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:46.381 08:44:03 -- event/cpu_locks.sh@141 -- # killprocess 1923027 00:09:46.381 08:44:03 -- common/autotest_common.sh@936 -- # '[' -z 1923027 ']' 00:09:46.381 08:44:03 -- common/autotest_common.sh@940 -- # kill -0 1923027 00:09:46.381 08:44:03 -- common/autotest_common.sh@941 -- # uname 00:09:46.381 08:44:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:46.381 08:44:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1923027 00:09:46.381 08:44:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:46.381 08:44:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:46.381 08:44:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1923027' 00:09:46.381 killing process with pid 1923027 00:09:46.381 08:44:03 -- common/autotest_common.sh@955 -- # kill 1923027 00:09:46.381 08:44:03 -- common/autotest_common.sh@960 -- # wait 1923027 00:09:46.639 00:09:46.639 real 0m1.896s 00:09:46.639 user 0m5.275s 00:09:46.639 sys 0m0.443s 00:09:46.639 08:44:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:46.640 08:44:03 -- common/autotest_common.sh@10 -- # set +x 00:09:46.640 ************************************ 00:09:46.640 END TEST locking_overlapped_coremask 00:09:46.640 ************************************ 00:09:46.640 08:44:03 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:46.640 08:44:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:46.640 08:44:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:46.640 08:44:03 -- common/autotest_common.sh@10 -- # set +x 00:09:46.898 ************************************ 00:09:46.898 START TEST locking_overlapped_coremask_via_rpc 00:09:46.898 ************************************ 00:09:46.898 08:44:04 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:09:46.898 08:44:04 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1923353 00:09:46.898 08:44:04 -- event/cpu_locks.sh@149 -- # waitforlisten 1923353 /var/tmp/spdk.sock 00:09:46.898 08:44:04 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:46.898 08:44:04 -- common/autotest_common.sh@817 -- # '[' -z 1923353 ']' 00:09:46.898 08:44:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.898 08:44:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:46.898 08:44:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.898 08:44:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:46.898 08:44:04 -- common/autotest_common.sh@10 -- # set +x 00:09:46.898 [2024-04-26 08:44:04.073185] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:46.898 [2024-04-26 08:44:04.073233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923353 ] 00:09:46.898 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.898 [2024-04-26 08:44:04.145228] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:46.898 [2024-04-26 08:44:04.145254] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.156 [2024-04-26 08:44:04.214709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.156 [2024-04-26 08:44:04.214804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.156 [2024-04-26 08:44:04.214807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.723 08:44:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:47.723 08:44:04 -- common/autotest_common.sh@850 -- # return 0 00:09:47.723 08:44:04 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1923552 00:09:47.723 08:44:04 -- event/cpu_locks.sh@153 -- # waitforlisten 1923552 /var/tmp/spdk2.sock 00:09:47.723 08:44:04 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:47.723 08:44:04 -- common/autotest_common.sh@817 -- # '[' -z 1923552 ']' 00:09:47.723 08:44:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:47.723 08:44:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:47.723 08:44:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:47.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:47.723 08:44:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:47.723 08:44:04 -- common/autotest_common.sh@10 -- # set +x 00:09:47.723 [2024-04-26 08:44:04.915544] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:47.723 [2024-04-26 08:44:04.915600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1923552 ] 00:09:47.723 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.981 [2024-04-26 08:44:05.014506] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:47.981 [2024-04-26 08:44:05.014537] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.981 [2024-04-26 08:44:05.152519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.981 [2024-04-26 08:44:05.156499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.981 [2024-04-26 08:44:05.156500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:48.549 08:44:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:48.549 08:44:05 -- common/autotest_common.sh@850 -- # return 0 00:09:48.549 08:44:05 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:48.549 08:44:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.549 08:44:05 -- common/autotest_common.sh@10 -- # set +x 00:09:48.549 08:44:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.549 08:44:05 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:48.549 08:44:05 -- common/autotest_common.sh@638 -- # local es=0 00:09:48.549 08:44:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:48.549 08:44:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:09:48.549 08:44:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:48.549 08:44:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:09:48.549 08:44:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:48.549 08:44:05 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:48.549 08:44:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.549 08:44:05 -- common/autotest_common.sh@10 -- # set +x 00:09:48.549 [2024-04-26 08:44:05.743521] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1923353 has claimed it. 
00:09:48.549 request: 00:09:48.549 { 00:09:48.549 "method": "framework_enable_cpumask_locks", 00:09:48.549 "req_id": 1 00:09:48.549 } 00:09:48.549 Got JSON-RPC error response 00:09:48.549 response: 00:09:48.549 { 00:09:48.549 "code": -32603, 00:09:48.549 "message": "Failed to claim CPU core: 2" 00:09:48.549 } 00:09:48.549 08:44:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:09:48.549 08:44:05 -- common/autotest_common.sh@641 -- # es=1 00:09:48.549 08:44:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:48.549 08:44:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:48.549 08:44:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:48.549 08:44:05 -- event/cpu_locks.sh@158 -- # waitforlisten 1923353 /var/tmp/spdk.sock 00:09:48.549 08:44:05 -- common/autotest_common.sh@817 -- # '[' -z 1923353 ']' 00:09:48.549 08:44:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.549 08:44:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:48.549 08:44:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.549 08:44:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:48.549 08:44:05 -- common/autotest_common.sh@10 -- # set +x 00:09:48.807 08:44:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:48.807 08:44:05 -- common/autotest_common.sh@850 -- # return 0 00:09:48.807 08:44:05 -- event/cpu_locks.sh@159 -- # waitforlisten 1923552 /var/tmp/spdk2.sock 00:09:48.807 08:44:05 -- common/autotest_common.sh@817 -- # '[' -z 1923552 ']' 00:09:48.807 08:44:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:48.807 08:44:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:48.807 08:44:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:48.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:09:48.807 08:44:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:48.807 08:44:05 -- common/autotest_common.sh@10 -- # set +x 00:09:49.066 08:44:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:49.066 08:44:06 -- common/autotest_common.sh@850 -- # return 0 00:09:49.066 08:44:06 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:49.066 08:44:06 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:49.066 08:44:06 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:49.066 08:44:06 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:49.066 00:09:49.066 real 0m2.101s 00:09:49.066 user 0m0.803s 00:09:49.066 sys 0m0.228s 00:09:49.066 08:44:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:49.066 08:44:06 -- common/autotest_common.sh@10 -- # set +x 00:09:49.066 ************************************ 00:09:49.066 END TEST locking_overlapped_coremask_via_rpc 00:09:49.066 ************************************ 00:09:49.066 08:44:06 -- event/cpu_locks.sh@174 -- # cleanup 00:09:49.066 08:44:06 -- event/cpu_locks.sh@15 -- # [[ -z 1923353 ]] 00:09:49.066 08:44:06 -- event/cpu_locks.sh@15 -- # killprocess 1923353 00:09:49.066 08:44:06 -- common/autotest_common.sh@936 -- # '[' -z 1923353 ']' 00:09:49.066 08:44:06 -- common/autotest_common.sh@940 -- # kill -0 1923353 00:09:49.066 08:44:06 -- common/autotest_common.sh@941 -- # uname 00:09:49.066 08:44:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:49.066 08:44:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1923353 00:09:49.066 08:44:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:49.066 08:44:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:49.066 08:44:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1923353' 00:09:49.066 killing process with pid 1923353 00:09:49.066 08:44:06 -- common/autotest_common.sh@955 -- # kill 1923353 00:09:49.066 08:44:06 -- common/autotest_common.sh@960 -- # wait 1923353 00:09:49.325 08:44:06 -- event/cpu_locks.sh@16 -- # [[ -z 1923552 ]] 00:09:49.325 08:44:06 -- event/cpu_locks.sh@16 -- # killprocess 1923552 00:09:49.325 08:44:06 -- common/autotest_common.sh@936 -- # '[' -z 1923552 ']' 00:09:49.325 08:44:06 -- common/autotest_common.sh@940 -- # kill -0 1923552 00:09:49.325 08:44:06 -- common/autotest_common.sh@941 -- # uname 00:09:49.325 08:44:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:49.325 08:44:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1923552 00:09:49.584 08:44:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:09:49.584 08:44:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:09:49.584 08:44:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1923552' 00:09:49.584 killing process with pid 1923552 00:09:49.584 08:44:06 -- common/autotest_common.sh@955 -- # kill 1923552 00:09:49.584 08:44:06 -- common/autotest_common.sh@960 -- # wait 1923552 00:09:49.842 08:44:06 -- event/cpu_locks.sh@18 -- # rm -f 00:09:49.842 08:44:06 -- event/cpu_locks.sh@1 -- # cleanup 00:09:49.842 08:44:06 -- event/cpu_locks.sh@15 -- # [[ -z 1923353 ]] 00:09:49.842 08:44:06 -- event/cpu_locks.sh@15 -- # killprocess 1923353 
00:09:49.842 08:44:06 -- common/autotest_common.sh@936 -- # '[' -z 1923353 ']' 00:09:49.842 08:44:06 -- common/autotest_common.sh@940 -- # kill -0 1923353 00:09:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1923353) - No such process 00:09:49.842 08:44:06 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1923353 is not found' 00:09:49.842 Process with pid 1923353 is not found 00:09:49.842 08:44:06 -- event/cpu_locks.sh@16 -- # [[ -z 1923552 ]] 00:09:49.842 08:44:06 -- event/cpu_locks.sh@16 -- # killprocess 1923552 00:09:49.842 08:44:06 -- common/autotest_common.sh@936 -- # '[' -z 1923552 ']' 00:09:49.842 08:44:06 -- common/autotest_common.sh@940 -- # kill -0 1923552 00:09:49.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1923552) - No such process 00:09:49.842 08:44:06 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1923552 is not found' 00:09:49.842 Process with pid 1923552 is not found 00:09:49.842 08:44:06 -- event/cpu_locks.sh@18 -- # rm -f 00:09:49.842 00:09:49.842 real 0m19.416s 00:09:49.842 user 0m30.974s 00:09:49.842 sys 0m6.412s 00:09:49.842 08:44:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:49.842 08:44:06 -- common/autotest_common.sh@10 -- # set +x 00:09:49.842 ************************************ 00:09:49.842 END TEST cpu_locks 00:09:49.842 ************************************ 00:09:49.842 00:09:49.842 real 0m45.992s 00:09:49.842 user 1m21.199s 00:09:49.842 sys 0m10.879s 00:09:49.843 08:44:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:49.843 08:44:06 -- common/autotest_common.sh@10 -- # set +x 00:09:49.843 ************************************ 00:09:49.843 END TEST event 00:09:49.843 ************************************ 00:09:49.843 08:44:07 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:49.843 08:44:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:49.843 08:44:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:49.843 08:44:07 -- common/autotest_common.sh@10 -- # set +x 00:09:50.101 ************************************ 00:09:50.102 START TEST thread 00:09:50.102 ************************************ 00:09:50.102 08:44:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:50.102 * Looking for test storage... 00:09:50.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:09:50.102 08:44:07 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:50.102 08:44:07 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:09:50.102 08:44:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:50.102 08:44:07 -- common/autotest_common.sh@10 -- # set +x 00:09:50.361 ************************************ 00:09:50.361 START TEST thread_poller_perf 00:09:50.361 ************************************ 00:09:50.361 08:44:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:50.361 [2024-04-26 08:44:07.508282] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:09:50.361 [2024-04-26 08:44:07.508355] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924017 ] 00:09:50.361 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.361 [2024-04-26 08:44:07.582683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.620 [2024-04-26 08:44:07.651312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.620 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:51.557 ====================================== 00:09:51.557 busy:2509266482 (cyc) 00:09:51.557 total_run_count: 431000 00:09:51.557 tsc_hz: 2500000000 (cyc) 00:09:51.557 ====================================== 00:09:51.557 poller_cost: 5821 (cyc), 2328 (nsec) 00:09:51.557 00:09:51.557 real 0m1.255s 00:09:51.557 user 0m1.170s 00:09:51.557 sys 0m0.080s 00:09:51.557 08:44:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:51.557 08:44:08 -- common/autotest_common.sh@10 -- # set +x 00:09:51.557 ************************************ 00:09:51.557 END TEST thread_poller_perf 00:09:51.557 ************************************ 00:09:51.557 08:44:08 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:51.557 08:44:08 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:09:51.557 08:44:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:51.557 08:44:08 -- common/autotest_common.sh@10 -- # set +x 00:09:51.817 ************************************ 00:09:51.817 START TEST thread_poller_perf 00:09:51.817 ************************************ 00:09:51.817 08:44:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:51.817 [2024-04-26 08:44:08.964733] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:51.817 [2024-04-26 08:44:08.964812] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924306 ] 00:09:51.817 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.817 [2024-04-26 08:44:09.037161] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.076 [2024-04-26 08:44:09.105960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.076 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:09:53.012 ====================================== 00:09:53.012 busy:2501799454 (cyc) 00:09:53.012 total_run_count: 5582000 00:09:53.012 tsc_hz: 2500000000 (cyc) 00:09:53.012 ====================================== 00:09:53.012 poller_cost: 448 (cyc), 179 (nsec) 00:09:53.012 00:09:53.012 real 0m1.253s 00:09:53.012 user 0m1.157s 00:09:53.012 sys 0m0.091s 00:09:53.012 08:44:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:53.012 08:44:10 -- common/autotest_common.sh@10 -- # set +x 00:09:53.012 ************************************ 00:09:53.012 END TEST thread_poller_perf 00:09:53.013 ************************************ 00:09:53.013 08:44:10 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:53.013 00:09:53.013 real 0m3.029s 00:09:53.013 user 0m2.515s 00:09:53.013 sys 0m0.473s 00:09:53.013 08:44:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:53.013 08:44:10 -- common/autotest_common.sh@10 -- # set +x 00:09:53.013 ************************************ 00:09:53.013 END TEST thread 00:09:53.013 ************************************ 00:09:53.272 08:44:10 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:09:53.272 08:44:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:53.272 08:44:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.272 08:44:10 -- common/autotest_common.sh@10 -- # set +x 00:09:53.272 ************************************ 00:09:53.272 START TEST accel 00:09:53.272 ************************************ 00:09:53.272 08:44:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:09:53.532 * Looking for test storage... 00:09:53.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:09:53.532 08:44:10 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:09:53.532 08:44:10 -- accel/accel.sh@82 -- # get_expected_opcs 00:09:53.532 08:44:10 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:53.532 08:44:10 -- accel/accel.sh@62 -- # spdk_tgt_pid=1924639 00:09:53.532 08:44:10 -- accel/accel.sh@63 -- # waitforlisten 1924639 00:09:53.532 08:44:10 -- common/autotest_common.sh@817 -- # '[' -z 1924639 ']' 00:09:53.532 08:44:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.532 08:44:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:53.532 08:44:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.532 08:44:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:53.532 08:44:10 -- common/autotest_common.sh@10 -- # set +x 00:09:53.532 08:44:10 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:53.532 08:44:10 -- accel/accel.sh@61 -- # build_accel_config 00:09:53.532 08:44:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:53.532 08:44:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:53.532 08:44:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:53.532 08:44:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:53.532 08:44:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:53.532 08:44:10 -- accel/accel.sh@40 -- # local IFS=, 00:09:53.532 08:44:10 -- accel/accel.sh@41 -- # jq -r . 
00:09:53.532 [2024-04-26 08:44:10.606462] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:53.532 [2024-04-26 08:44:10.606514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924639 ] 00:09:53.532 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.532 [2024-04-26 08:44:10.677225] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.532 [2024-04-26 08:44:10.748972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.469 08:44:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:54.469 08:44:11 -- common/autotest_common.sh@850 -- # return 0 00:09:54.469 08:44:11 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:09:54.469 08:44:11 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:09:54.469 08:44:11 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:09:54.469 08:44:11 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:09:54.469 08:44:11 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:54.469 08:44:11 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:09:54.469 08:44:11 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:09:54.469 08:44:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.469 08:44:11 -- common/autotest_common.sh@10 -- # set +x 00:09:54.469 08:44:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.469 08:44:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # IFS== 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # read -r opc module 00:09:54.469 08:44:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:54.469 08:44:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # IFS== 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # read -r opc module 00:09:54.469 08:44:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:54.469 08:44:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # IFS== 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # read -r opc module 00:09:54.469 08:44:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:54.469 08:44:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # IFS== 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # read -r opc module 00:09:54.469 08:44:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:54.469 08:44:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # IFS== 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # read -r opc module 00:09:54.469 08:44:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:54.469 08:44:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # IFS== 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # read -r opc module 00:09:54.469 08:44:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:54.469 08:44:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # IFS== 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # read -r opc module 00:09:54.469 08:44:11 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:09:54.469 08:44:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # IFS== 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # read -r opc module 00:09:54.469 08:44:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:54.469 08:44:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:54.469 08:44:11 -- accel/accel.sh@72 -- # IFS== 00:09:54.470 08:44:11 -- accel/accel.sh@72 -- # read -r opc module 00:09:54.470 08:44:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:54.470 08:44:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:54.470 08:44:11 -- accel/accel.sh@72 -- # IFS== 00:09:54.470 08:44:11 -- accel/accel.sh@72 -- # read -r opc module 00:09:54.470 08:44:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:54.470 08:44:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:54.470 08:44:11 -- accel/accel.sh@72 -- # IFS== 00:09:54.470 08:44:11 -- accel/accel.sh@72 -- # read -r opc module 00:09:54.470 08:44:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:54.470 08:44:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:54.470 08:44:11 -- accel/accel.sh@72 -- # IFS== 00:09:54.470 08:44:11 -- accel/accel.sh@72 -- # read -r opc module 00:09:54.470 08:44:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:54.470 08:44:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:54.470 08:44:11 -- accel/accel.sh@72 -- # IFS== 00:09:54.470 08:44:11 -- accel/accel.sh@72 -- # read -r opc module 00:09:54.470 08:44:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:54.470 08:44:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:54.470 08:44:11 -- accel/accel.sh@72 -- # IFS== 00:09:54.470 08:44:11 -- accel/accel.sh@72 -- # read -r opc module 00:09:54.470 08:44:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:54.470 08:44:11 -- accel/accel.sh@75 -- # killprocess 1924639 00:09:54.470 08:44:11 -- common/autotest_common.sh@936 -- # '[' -z 1924639 ']' 00:09:54.470 08:44:11 -- common/autotest_common.sh@940 -- # kill -0 1924639 00:09:54.470 08:44:11 -- common/autotest_common.sh@941 -- # uname 00:09:54.470 08:44:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:54.470 08:44:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1924639 00:09:54.470 08:44:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:54.470 08:44:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:54.470 08:44:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1924639' 00:09:54.470 killing process with pid 1924639 00:09:54.470 08:44:11 -- common/autotest_common.sh@955 -- # kill 1924639 00:09:54.470 08:44:11 -- common/autotest_common.sh@960 -- # wait 1924639 00:09:54.732 08:44:11 -- accel/accel.sh@76 -- # trap - ERR 00:09:54.732 08:44:11 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:09:54.732 08:44:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:54.732 08:44:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.732 08:44:11 -- common/autotest_common.sh@10 -- # set +x 00:09:54.732 08:44:11 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:09:54.732 08:44:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:54.732 08:44:11 -- accel/accel.sh@12 -- # 
build_accel_config 00:09:54.732 08:44:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:54.732 08:44:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:54.732 08:44:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:54.732 08:44:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:54.732 08:44:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:54.732 08:44:11 -- accel/accel.sh@40 -- # local IFS=, 00:09:54.732 08:44:11 -- accel/accel.sh@41 -- # jq -r . 00:09:54.732 08:44:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:54.732 08:44:11 -- common/autotest_common.sh@10 -- # set +x 00:09:54.992 08:44:12 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:54.992 08:44:12 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:54.992 08:44:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.992 08:44:12 -- common/autotest_common.sh@10 -- # set +x 00:09:54.992 ************************************ 00:09:54.992 START TEST accel_missing_filename 00:09:54.992 ************************************ 00:09:54.992 08:44:12 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:09:54.992 08:44:12 -- common/autotest_common.sh@638 -- # local es=0 00:09:54.992 08:44:12 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:54.992 08:44:12 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:09:54.992 08:44:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:54.992 08:44:12 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:09:54.992 08:44:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:54.992 08:44:12 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:09:54.992 08:44:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:54.992 08:44:12 -- accel/accel.sh@12 -- # build_accel_config 00:09:54.992 08:44:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:54.992 08:44:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:54.992 08:44:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:54.992 08:44:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:54.992 08:44:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:54.992 08:44:12 -- accel/accel.sh@40 -- # local IFS=, 00:09:54.992 08:44:12 -- accel/accel.sh@41 -- # jq -r . 00:09:54.992 [2024-04-26 08:44:12.192041] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:54.992 [2024-04-26 08:44:12.192123] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1924965 ] 00:09:54.992 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.251 [2024-04-26 08:44:12.266982] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.251 [2024-04-26 08:44:12.337534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.251 [2024-04-26 08:44:12.378848] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:55.251 [2024-04-26 08:44:12.439260] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:09:55.510 A filename is required. 
00:09:55.510 08:44:12 -- common/autotest_common.sh@641 -- # es=234 00:09:55.510 08:44:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:55.510 08:44:12 -- common/autotest_common.sh@650 -- # es=106 00:09:55.510 08:44:12 -- common/autotest_common.sh@651 -- # case "$es" in 00:09:55.510 08:44:12 -- common/autotest_common.sh@658 -- # es=1 00:09:55.510 08:44:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:55.510 00:09:55.510 real 0m0.365s 00:09:55.510 user 0m0.268s 00:09:55.510 sys 0m0.134s 00:09:55.510 08:44:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:55.510 08:44:12 -- common/autotest_common.sh@10 -- # set +x 00:09:55.510 ************************************ 00:09:55.510 END TEST accel_missing_filename 00:09:55.510 ************************************ 00:09:55.510 08:44:12 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:55.510 08:44:12 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:55.510 08:44:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:55.510 08:44:12 -- common/autotest_common.sh@10 -- # set +x 00:09:55.510 ************************************ 00:09:55.510 START TEST accel_compress_verify 00:09:55.510 ************************************ 00:09:55.510 08:44:12 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:55.510 08:44:12 -- common/autotest_common.sh@638 -- # local es=0 00:09:55.510 08:44:12 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:55.510 08:44:12 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:09:55.510 08:44:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:55.510 08:44:12 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:09:55.510 08:44:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:55.510 08:44:12 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:55.510 08:44:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:09:55.510 08:44:12 -- accel/accel.sh@12 -- # build_accel_config 00:09:55.510 08:44:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:55.510 08:44:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:55.510 08:44:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:55.510 08:44:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:55.510 08:44:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:55.510 08:44:12 -- accel/accel.sh@40 -- # local IFS=, 00:09:55.510 08:44:12 -- accel/accel.sh@41 -- # jq -r . 00:09:55.510 [2024-04-26 08:44:12.748139] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:09:55.510 [2024-04-26 08:44:12.748205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1925240 ] 00:09:55.769 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.769 [2024-04-26 08:44:12.820060] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.769 [2024-04-26 08:44:12.886260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.769 [2024-04-26 08:44:12.927293] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:55.769 [2024-04-26 08:44:12.987055] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:09:56.028 00:09:56.028 Compression does not support the verify option, aborting. 00:09:56.028 08:44:13 -- common/autotest_common.sh@641 -- # es=161 00:09:56.028 08:44:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:56.028 08:44:13 -- common/autotest_common.sh@650 -- # es=33 00:09:56.028 08:44:13 -- common/autotest_common.sh@651 -- # case "$es" in 00:09:56.028 08:44:13 -- common/autotest_common.sh@658 -- # es=1 00:09:56.028 08:44:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:56.028 00:09:56.028 real 0m0.356s 00:09:56.028 user 0m0.270s 00:09:56.028 sys 0m0.124s 00:09:56.028 08:44:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:56.028 08:44:13 -- common/autotest_common.sh@10 -- # set +x 00:09:56.028 ************************************ 00:09:56.028 END TEST accel_compress_verify 00:09:56.028 ************************************ 00:09:56.028 08:44:13 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:56.028 08:44:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:56.028 08:44:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:56.028 08:44:13 -- common/autotest_common.sh@10 -- # set +x 00:09:56.028 ************************************ 00:09:56.028 START TEST accel_wrong_workload 00:09:56.028 ************************************ 00:09:56.028 08:44:13 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:09:56.028 08:44:13 -- common/autotest_common.sh@638 -- # local es=0 00:09:56.028 08:44:13 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:56.028 08:44:13 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:09:56.028 08:44:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:56.028 08:44:13 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:09:56.028 08:44:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:56.028 08:44:13 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:09:56.028 08:44:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:56.028 08:44:13 -- accel/accel.sh@12 -- # build_accel_config 00:09:56.028 08:44:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:56.028 08:44:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:56.028 08:44:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:56.028 08:44:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:56.028 08:44:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:56.028 08:44:13 -- accel/accel.sh@40 -- # local IFS=, 00:09:56.028 08:44:13 -- accel/accel.sh@41 -- # jq -r . 
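Each of these negative tests is wrapped in the NOT helper from common/autotest_common.sh, whose bookkeeping shows up throughout this section as the local es=0, (( es > 128 )), and (( !es == 0 )) trace lines. A condensed sketch of the idea, not the exact SPDK implementation (it assumes accel_perf is on PATH):

    # NOT <cmd...>: succeed only when <cmd> fails (inverted exit status).
    NOT() {
        local es=0
        "$@" || es=$?
        # The trace above maps 234 -> 106 -> 1; the real helper is more
        # careful about telling signals apart from ordinary failures.
        (( es > 128 )) && es=$(( es - 128 ))
        (( es != 0 ))   # return success only when the command failed
    }

    NOT accel_perf -t 1 -w foobar   # passes: foobar is not a valid workload

The foobar run that produces the "Unsupported workload type" message below is exactly such a NOT-wrapped invocation.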
00:09:56.287 Unsupported workload type: foobar 00:09:56.287 [2024-04-26 08:44:13.283005] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:56.287 accel_perf options: 00:09:56.287 [-h help message] 00:09:56.287 [-q queue depth per core] 00:09:56.287 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:56.287 [-T number of threads per core 00:09:56.287 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:56.287 [-t time in seconds] 00:09:56.287 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:56.287 [ dif_verify, , dif_generate, dif_generate_copy 00:09:56.287 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:56.287 [-l for compress/decompress workloads, name of uncompressed input file 00:09:56.287 [-S for crc32c workload, use this seed value (default 0) 00:09:56.287 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:56.287 [-f for fill workload, use this BYTE value (default 255) 00:09:56.287 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:56.287 [-y verify result if this switch is on] 00:09:56.287 [-a tasks to allocate per core (default: same value as -q)] 00:09:56.287 Can be used to spread operations across a wider range of memory. 00:09:56.287 08:44:13 -- common/autotest_common.sh@641 -- # es=1 00:09:56.287 08:44:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:56.287 08:44:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:56.287 08:44:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:56.287 00:09:56.287 real 0m0.035s 00:09:56.287 user 0m0.019s 00:09:56.287 sys 0m0.016s 00:09:56.287 08:44:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:56.287 08:44:13 -- common/autotest_common.sh@10 -- # set +x 00:09:56.287 ************************************ 00:09:56.287 END TEST accel_wrong_workload 00:09:56.287 ************************************ 00:09:56.287 Error: writing output failed: Broken pipe 00:09:56.287 08:44:13 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:56.287 08:44:13 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:56.287 08:44:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:56.287 08:44:13 -- common/autotest_common.sh@10 -- # set +x 00:09:56.287 ************************************ 00:09:56.287 START TEST accel_negative_buffers 00:09:56.287 ************************************ 00:09:56.287 08:44:13 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:56.287 08:44:13 -- common/autotest_common.sh@638 -- # local es=0 00:09:56.287 08:44:13 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:56.287 08:44:13 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:09:56.287 08:44:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:56.287 08:44:13 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:09:56.287 08:44:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:56.287 08:44:13 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:09:56.287 08:44:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:09:56.287 08:44:13 -- accel/accel.sh@12 -- # build_accel_config 00:09:56.287 08:44:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:56.287 08:44:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:56.287 08:44:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:56.287 08:44:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:56.287 08:44:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:56.287 08:44:13 -- accel/accel.sh@40 -- # local IFS=, 00:09:56.287 08:44:13 -- accel/accel.sh@41 -- # jq -r . 00:09:56.287 -x option must be non-negative. 00:09:56.287 [2024-04-26 08:44:13.532368] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:56.545 accel_perf options: 00:09:56.545 [-h help message] 00:09:56.545 [-q queue depth per core] 00:09:56.545 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:56.545 [-T number of threads per core 00:09:56.545 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:56.545 [-t time in seconds] 00:09:56.545 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:56.545 [ dif_verify, , dif_generate, dif_generate_copy 00:09:56.545 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:56.545 [-l for compress/decompress workloads, name of uncompressed input file 00:09:56.545 [-S for crc32c workload, use this seed value (default 0) 00:09:56.545 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:56.545 [-f for fill workload, use this BYTE value (default 255) 00:09:56.545 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:56.545 [-y verify result if this switch is on] 00:09:56.545 [-a tasks to allocate per core (default: same value as -q)] 00:09:56.546 Can be used to spread operations across a wider range of memory. 
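The usage text above is printed because -x failed validation: the xor source-buffer count must be non-negative, with 2 as the documented minimum and default. A sketch of the failing call and an assumed-valid counterpart:

    ACCEL_PERF=${ACCEL_PERF:-./spdk/build/examples/accel_perf}   # assumed path
    ! "$ACCEL_PERF" -t 1 -w xor -y -x -1   # "-x option must be non-negative."
    "$ACCEL_PERF" -t 1 -w xor -y -x 2      # assumption: 2 sources is accepted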
00:09:56.546 08:44:13 -- common/autotest_common.sh@641 -- # es=1 00:09:56.546 08:44:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:56.546 08:44:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:56.546 08:44:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:56.546 00:09:56.546 real 0m0.036s 00:09:56.546 user 0m0.020s 00:09:56.546 sys 0m0.016s 00:09:56.546 08:44:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:56.546 08:44:13 -- common/autotest_common.sh@10 -- # set +x 00:09:56.546 ************************************ 00:09:56.546 END TEST accel_negative_buffers 00:09:56.546 ************************************ 00:09:56.546 Error: writing output failed: Broken pipe 00:09:56.546 08:44:13 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:56.546 08:44:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:56.546 08:44:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:56.546 08:44:13 -- common/autotest_common.sh@10 -- # set +x 00:09:56.546 ************************************ 00:09:56.546 START TEST accel_crc32c 00:09:56.546 ************************************ 00:09:56.546 08:44:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:56.546 08:44:13 -- accel/accel.sh@16 -- # local accel_opc 00:09:56.546 08:44:13 -- accel/accel.sh@17 -- # local accel_module 00:09:56.546 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.546 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.546 08:44:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:56.546 08:44:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:56.546 08:44:13 -- accel/accel.sh@12 -- # build_accel_config 00:09:56.546 08:44:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:56.546 08:44:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:56.546 08:44:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:56.546 08:44:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:56.546 08:44:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:56.546 08:44:13 -- accel/accel.sh@40 -- # local IFS=, 00:09:56.546 08:44:13 -- accel/accel.sh@41 -- # jq -r . 00:09:56.546 [2024-04-26 08:44:13.747878] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
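From here on the section switches to positive tests: run_test invokes accel_test, which in turn launches accel_perf (the accel.sh@12 trace lines) for one second per workload. For the crc32c case above the flags are -t 1 (duration), -w crc32c (workload), -S 32 (seed, per the usage text), and -y (verify). A hand-runnable sketch, leaving out the JSON config the harness feeds through -c /dev/fd/62:

    ACCEL_PERF=${ACCEL_PERF:-./spdk/build/examples/accel_perf}   # assumed path
    # one second of software crc32c on 4 KiB buffers, seed 32, verified
    "$ACCEL_PERF" -t 1 -w crc32c -S 32 -y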
00:09:56.546 [2024-04-26 08:44:13.747942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1925336 ] 00:09:56.546 EAL: No free 2048 kB hugepages reported on node 1 00:09:56.805 [2024-04-26 08:44:13.821385] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.805 [2024-04-26 08:44:13.892253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val= 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val= 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val=0x1 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val= 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val= 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val=crc32c 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val=32 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val= 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val=software 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@22 -- # accel_module=software 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val=32 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val=32 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- 
accel/accel.sh@20 -- # val=1 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val=Yes 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val= 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:56.805 08:44:13 -- accel/accel.sh@20 -- # val= 00:09:56.805 08:44:13 -- accel/accel.sh@21 -- # case "$var" in 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # IFS=: 00:09:56.805 08:44:13 -- accel/accel.sh@19 -- # read -r var val 00:09:58.182 08:44:15 -- accel/accel.sh@20 -- # val= 00:09:58.182 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.182 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.182 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.182 08:44:15 -- accel/accel.sh@20 -- # val= 00:09:58.182 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.182 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.182 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.182 08:44:15 -- accel/accel.sh@20 -- # val= 00:09:58.182 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.182 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.182 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.182 08:44:15 -- accel/accel.sh@20 -- # val= 00:09:58.182 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.182 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.182 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.183 08:44:15 -- accel/accel.sh@20 -- # val= 00:09:58.183 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.183 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.183 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.183 08:44:15 -- accel/accel.sh@20 -- # val= 00:09:58.183 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.183 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.183 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.183 08:44:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:58.183 08:44:15 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:58.183 08:44:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:58.183 00:09:58.183 real 0m1.364s 00:09:58.183 user 0m1.239s 00:09:58.183 sys 0m0.129s 00:09:58.183 08:44:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:58.183 08:44:15 -- common/autotest_common.sh@10 -- # set +x 00:09:58.183 ************************************ 00:09:58.183 END TEST accel_crc32c 00:09:58.183 ************************************ 00:09:58.183 08:44:15 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:58.183 08:44:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:58.183 08:44:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:58.183 08:44:15 -- common/autotest_common.sh@10 -- # set +x 00:09:58.183 ************************************ 00:09:58.183 START TEST 
accel_crc32c_C2 00:09:58.183 ************************************ 00:09:58.183 08:44:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:58.183 08:44:15 -- accel/accel.sh@16 -- # local accel_opc 00:09:58.183 08:44:15 -- accel/accel.sh@17 -- # local accel_module 00:09:58.183 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.183 08:44:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:58.183 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.183 08:44:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:58.183 08:44:15 -- accel/accel.sh@12 -- # build_accel_config 00:09:58.183 08:44:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:58.183 08:44:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:58.183 08:44:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:58.183 08:44:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:58.183 08:44:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:58.183 08:44:15 -- accel/accel.sh@40 -- # local IFS=, 00:09:58.183 08:44:15 -- accel/accel.sh@41 -- # jq -r . 00:09:58.183 [2024-04-26 08:44:15.313406] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:09:58.183 [2024-04-26 08:44:15.313468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1925630 ] 00:09:58.183 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.183 [2024-04-26 08:44:15.384922] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.443 [2024-04-26 08:44:15.457556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val= 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val= 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val=0x1 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val= 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val= 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val=crc32c 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val=0 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val= 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val=software 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@22 -- # accel_module=software 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val=32 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val=32 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val=1 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val=Yes 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val= 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:58.443 08:44:15 -- accel/accel.sh@20 -- # val= 00:09:58.443 08:44:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # IFS=: 00:09:58.443 08:44:15 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:16 -- accel/accel.sh@20 -- # val= 00:09:59.820 08:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:16 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:16 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:16 -- accel/accel.sh@20 -- # val= 00:09:59.820 08:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:16 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:16 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:16 -- accel/accel.sh@20 -- # val= 00:09:59.820 08:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:16 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:16 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:16 -- accel/accel.sh@20 -- # val= 00:09:59.820 08:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:16 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:16 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:16 -- accel/accel.sh@20 -- # val= 00:09:59.820 08:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:16 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:16 -- 
accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:16 -- accel/accel.sh@20 -- # val= 00:09:59.820 08:44:16 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:16 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:16 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:59.820 08:44:16 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:59.820 08:44:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:59.820 00:09:59.820 real 0m1.364s 00:09:59.820 user 0m1.241s 00:09:59.820 sys 0m0.128s 00:09:59.820 08:44:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:59.820 08:44:16 -- common/autotest_common.sh@10 -- # set +x 00:09:59.820 ************************************ 00:09:59.820 END TEST accel_crc32c_C2 00:09:59.820 ************************************ 00:09:59.820 08:44:16 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:59.820 08:44:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:59.820 08:44:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:59.820 08:44:16 -- common/autotest_common.sh@10 -- # set +x 00:09:59.820 ************************************ 00:09:59.820 START TEST accel_copy 00:09:59.820 ************************************ 00:09:59.820 08:44:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:09:59.820 08:44:16 -- accel/accel.sh@16 -- # local accel_opc 00:09:59.820 08:44:16 -- accel/accel.sh@17 -- # local accel_module 00:09:59.820 08:44:16 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:16 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:59.820 08:44:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:59.820 08:44:16 -- accel/accel.sh@12 -- # build_accel_config 00:09:59.820 08:44:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:59.820 08:44:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:59.820 08:44:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.820 08:44:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.820 08:44:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:59.820 08:44:16 -- accel/accel.sh@40 -- # local IFS=, 00:09:59.820 08:44:16 -- accel/accel.sh@41 -- # jq -r . 00:09:59.820 [2024-04-26 08:44:16.860295] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
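The accel_crc32c_C2 test that just finished reran the same workload with -C 2, which the usage text describes as the I/O vector size, so the checksum is computed over a two-element vector of 4 KiB buffers; the seed stays at its default of 0 (the val=0 line in that trace). A sketch:

    ACCEL_PERF=${ACCEL_PERF:-./spdk/build/examples/accel_perf}   # assumed path
    # crc32c over a 2-element io vector; default seed 0
    "$ACCEL_PERF" -t 1 -w crc32c -y -C 2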
00:09:59.820 [2024-04-26 08:44:16.860354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1925931 ] 00:09:59.820 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.820 [2024-04-26 08:44:16.931473] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.820 [2024-04-26 08:44:17.000894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.820 08:44:17 -- accel/accel.sh@20 -- # val= 00:09:59.820 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:17 -- accel/accel.sh@20 -- # val= 00:09:59.820 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:17 -- accel/accel.sh@20 -- # val=0x1 00:09:59.820 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:17 -- accel/accel.sh@20 -- # val= 00:09:59.820 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:17 -- accel/accel.sh@20 -- # val= 00:09:59.820 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:17 -- accel/accel.sh@20 -- # val=copy 00:09:59.820 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:17 -- accel/accel.sh@23 -- # accel_opc=copy 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:59.820 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:17 -- accel/accel.sh@20 -- # val= 00:09:59.820 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:17 -- accel/accel.sh@20 -- # val=software 00:09:59.820 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:17 -- accel/accel.sh@22 -- # accel_module=software 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:17 -- accel/accel.sh@20 -- # val=32 00:09:59.820 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:17 -- accel/accel.sh@20 -- # val=32 00:09:59.820 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:17 -- accel/accel.sh@20 -- # val=1 00:09:59.820 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:17 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:09:59.820 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:17 -- accel/accel.sh@20 -- # val=Yes 00:09:59.820 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.820 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:09:59.820 08:44:17 -- accel/accel.sh@20 -- # val= 00:09:59.821 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.821 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.821 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:09:59.821 08:44:17 -- accel/accel.sh@20 -- # val= 00:09:59.821 08:44:17 -- accel/accel.sh@21 -- # case "$var" in 00:09:59.821 08:44:17 -- accel/accel.sh@19 -- # IFS=: 00:09:59.821 08:44:17 -- accel/accel.sh@19 -- # read -r var val 00:10:01.197 08:44:18 -- accel/accel.sh@20 -- # val= 00:10:01.197 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.197 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.197 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.197 08:44:18 -- accel/accel.sh@20 -- # val= 00:10:01.197 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.197 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.197 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.197 08:44:18 -- accel/accel.sh@20 -- # val= 00:10:01.197 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.197 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.197 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.197 08:44:18 -- accel/accel.sh@20 -- # val= 00:10:01.197 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.197 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.197 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.197 08:44:18 -- accel/accel.sh@20 -- # val= 00:10:01.197 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.197 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.197 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.197 08:44:18 -- accel/accel.sh@20 -- # val= 00:10:01.197 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.197 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.197 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.197 08:44:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:01.197 08:44:18 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:10:01.197 08:44:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:01.197 00:10:01.197 real 0m1.363s 00:10:01.197 user 0m1.226s 00:10:01.197 sys 0m0.140s 00:10:01.197 08:44:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:01.197 08:44:18 -- common/autotest_common.sh@10 -- # set +x 00:10:01.197 ************************************ 00:10:01.197 END TEST accel_copy 00:10:01.197 ************************************ 00:10:01.197 08:44:18 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:01.197 08:44:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:01.197 08:44:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:01.197 08:44:18 -- common/autotest_common.sh@10 -- # set +x 00:10:01.197 ************************************ 00:10:01.197 START TEST accel_fill 00:10:01.197 ************************************ 00:10:01.197 08:44:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:01.197 08:44:18 -- accel/accel.sh@16 -- # local accel_opc 
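The accel_fill run being set up here adds three more knobs from the usage text: -f 128 picks the fill byte (it appears as val=0x80 in the trace below), -q 64 sets the queue depth per core, and -a 64 allocates 64 tasks per core. A sketch:

    ACCEL_PERF=${ACCEL_PERF:-./spdk/build/examples/accel_perf}   # assumed path
    # fill 4 KiB buffers with byte 128 (0x80), queue depth 64, 64 tasks/core
    "$ACCEL_PERF" -t 1 -w fill -f 128 -q 64 -a 64 -y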
00:10:01.197 08:44:18 -- accel/accel.sh@17 -- # local accel_module 00:10:01.197 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.197 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.197 08:44:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:01.197 08:44:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:01.197 08:44:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:01.197 08:44:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:01.197 08:44:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:01.197 08:44:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:01.197 08:44:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:01.197 08:44:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:01.197 08:44:18 -- accel/accel.sh@40 -- # local IFS=, 00:10:01.197 08:44:18 -- accel/accel.sh@41 -- # jq -r . 00:10:01.197 [2024-04-26 08:44:18.399861] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:10:01.197 [2024-04-26 08:44:18.399931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1926217 ] 00:10:01.197 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.456 [2024-04-26 08:44:18.474676] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.456 [2024-04-26 08:44:18.544135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val= 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val= 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val=0x1 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val= 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val= 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val=fill 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@23 -- # accel_opc=fill 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val=0x80 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 
-- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val= 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val=software 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@22 -- # accel_module=software 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val=64 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val=64 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val=1 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val=Yes 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val= 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:01.456 08:44:18 -- accel/accel.sh@20 -- # val= 00:10:01.456 08:44:18 -- accel/accel.sh@21 -- # case "$var" in 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # IFS=: 00:10:01.456 08:44:18 -- accel/accel.sh@19 -- # read -r var val 00:10:02.834 08:44:19 -- accel/accel.sh@20 -- # val= 00:10:02.834 08:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.834 08:44:19 -- accel/accel.sh@19 -- # IFS=: 00:10:02.834 08:44:19 -- accel/accel.sh@19 -- # read -r var val 00:10:02.834 08:44:19 -- accel/accel.sh@20 -- # val= 00:10:02.834 08:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.834 08:44:19 -- accel/accel.sh@19 -- # IFS=: 00:10:02.834 08:44:19 -- accel/accel.sh@19 -- # read -r var val 00:10:02.834 08:44:19 -- accel/accel.sh@20 -- # val= 00:10:02.834 08:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.834 08:44:19 -- accel/accel.sh@19 -- # IFS=: 00:10:02.834 08:44:19 -- accel/accel.sh@19 -- # read -r var val 00:10:02.834 08:44:19 -- accel/accel.sh@20 -- # val= 00:10:02.834 08:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.834 08:44:19 -- accel/accel.sh@19 -- # IFS=: 00:10:02.834 08:44:19 -- accel/accel.sh@19 -- # read -r var val 00:10:02.834 08:44:19 -- accel/accel.sh@20 -- # val= 00:10:02.834 08:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.834 08:44:19 -- accel/accel.sh@19 -- # IFS=: 00:10:02.834 08:44:19 -- accel/accel.sh@19 -- # read -r var val 00:10:02.834 08:44:19 -- accel/accel.sh@20 -- # val= 00:10:02.834 08:44:19 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.834 08:44:19 -- accel/accel.sh@19 
-- # IFS=: 00:10:02.834 08:44:19 -- accel/accel.sh@19 -- # read -r var val 00:10:02.834 08:44:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:02.834 08:44:19 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:10:02.834 08:44:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:02.834 00:10:02.834 real 0m1.367s 00:10:02.834 user 0m1.240s 00:10:02.834 sys 0m0.131s 00:10:02.834 08:44:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:02.834 08:44:19 -- common/autotest_common.sh@10 -- # set +x 00:10:02.834 ************************************ 00:10:02.834 END TEST accel_fill 00:10:02.834 ************************************ 00:10:02.834 08:44:19 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:02.834 08:44:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:02.834 08:44:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:02.834 08:44:19 -- common/autotest_common.sh@10 -- # set +x 00:10:02.834 ************************************ 00:10:02.834 START TEST accel_copy_crc32c 00:10:02.834 ************************************ 00:10:02.834 08:44:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:10:02.834 08:44:19 -- accel/accel.sh@16 -- # local accel_opc 00:10:02.834 08:44:19 -- accel/accel.sh@17 -- # local accel_module 00:10:02.834 08:44:19 -- accel/accel.sh@19 -- # IFS=: 00:10:02.834 08:44:19 -- accel/accel.sh@19 -- # read -r var val 00:10:02.834 08:44:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:02.834 08:44:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:02.834 08:44:19 -- accel/accel.sh@12 -- # build_accel_config 00:10:02.834 08:44:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:02.834 08:44:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:02.834 08:44:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:02.834 08:44:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:02.834 08:44:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:02.834 08:44:19 -- accel/accel.sh@40 -- # local IFS=, 00:10:02.834 08:44:19 -- accel/accel.sh@41 -- # jq -r . 00:10:02.834 [2024-04-26 08:44:19.945514] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
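accel_copy_crc32c, starting above, exercises the fused operation that copies a buffer and computes its crc32c in one step; reading the two '4096 bytes' values in the trace below as the source and destination buffer sizes is an interpretation, not something the log states. A sketch:

    ACCEL_PERF=${ACCEL_PERF:-./spdk/build/examples/accel_perf}   # assumed path
    # fused copy + crc32c, verified
    "$ACCEL_PERF" -t 1 -w copy_crc32c -y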
00:10:02.834 [2024-04-26 08:44:19.945578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1926502 ] 00:10:02.834 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.834 [2024-04-26 08:44:20.020965] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.093 [2024-04-26 08:44:20.106361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val= 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val= 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val=0x1 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val= 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val= 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val=0 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val= 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val=software 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@22 -- # accel_module=software 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val=32 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 
00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val=32 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val=1 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val=Yes 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val= 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:03.093 08:44:20 -- accel/accel.sh@20 -- # val= 00:10:03.093 08:44:20 -- accel/accel.sh@21 -- # case "$var" in 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # IFS=: 00:10:03.093 08:44:20 -- accel/accel.sh@19 -- # read -r var val 00:10:04.467 08:44:21 -- accel/accel.sh@20 -- # val= 00:10:04.467 08:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.467 08:44:21 -- accel/accel.sh@19 -- # IFS=: 00:10:04.467 08:44:21 -- accel/accel.sh@19 -- # read -r var val 00:10:04.467 08:44:21 -- accel/accel.sh@20 -- # val= 00:10:04.467 08:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.467 08:44:21 -- accel/accel.sh@19 -- # IFS=: 00:10:04.467 08:44:21 -- accel/accel.sh@19 -- # read -r var val 00:10:04.467 08:44:21 -- accel/accel.sh@20 -- # val= 00:10:04.467 08:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.467 08:44:21 -- accel/accel.sh@19 -- # IFS=: 00:10:04.467 08:44:21 -- accel/accel.sh@19 -- # read -r var val 00:10:04.467 08:44:21 -- accel/accel.sh@20 -- # val= 00:10:04.468 08:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # IFS=: 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # read -r var val 00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val= 00:10:04.468 08:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # IFS=: 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # read -r var val 00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val= 00:10:04.468 08:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # IFS=: 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # read -r var val 00:10:04.468 08:44:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:04.468 08:44:21 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:04.468 08:44:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:04.468 00:10:04.468 real 0m1.384s 00:10:04.468 user 0m1.247s 00:10:04.468 sys 0m0.141s 00:10:04.468 08:44:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:04.468 08:44:21 -- common/autotest_common.sh@10 -- # set +x 00:10:04.468 ************************************ 00:10:04.468 END TEST accel_copy_crc32c 00:10:04.468 ************************************ 00:10:04.468 08:44:21 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:04.468 
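The starred START TEST / END TEST banners and the real/user/sys triplets in this section come from the run_test helper, which names a test, times its body, and brackets the output. A loose sketch of that pattern (the real helper in common/autotest_common.sh also toggles xtrace and tracks exit status; run_test_sketch is a hypothetical name):

    # run_test_sketch <name> <cmd...>: banner, timed run, banner
    run_test_sketch() {
        local name=$1; shift
        printf '%s\n' '************************************' \
            "START TEST $name" '************************************'
        time "$@"
        printf '%s\n' '************************************' \
            "END TEST $name" '************************************'
    }

    run_test_sketch accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2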
08:44:21 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:04.468 08:44:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:04.468 08:44:21 -- common/autotest_common.sh@10 -- # set +x 00:10:04.468 ************************************ 00:10:04.468 START TEST accel_copy_crc32c_C2 00:10:04.468 ************************************ 00:10:04.468 08:44:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:04.468 08:44:21 -- accel/accel.sh@16 -- # local accel_opc 00:10:04.468 08:44:21 -- accel/accel.sh@17 -- # local accel_module 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # IFS=: 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # read -r var val 00:10:04.468 08:44:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:04.468 08:44:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:04.468 08:44:21 -- accel/accel.sh@12 -- # build_accel_config 00:10:04.468 08:44:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:04.468 08:44:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:04.468 08:44:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:04.468 08:44:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:04.468 08:44:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:04.468 08:44:21 -- accel/accel.sh@40 -- # local IFS=, 00:10:04.468 08:44:21 -- accel/accel.sh@41 -- # jq -r . 00:10:04.468 [2024-04-26 08:44:21.505079] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:10:04.468 [2024-04-26 08:44:21.505143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1926801 ] 00:10:04.468 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.468 [2024-04-26 08:44:21.578207] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.468 [2024-04-26 08:44:21.648356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val= 00:10:04.468 08:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # IFS=: 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # read -r var val 00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val= 00:10:04.468 08:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # IFS=: 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # read -r var val 00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val=0x1 00:10:04.468 08:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # IFS=: 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # read -r var val 00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val= 00:10:04.468 08:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # IFS=: 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # read -r var val 00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val= 00:10:04.468 08:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # IFS=: 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # read -r var val 00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:04.468 08:44:21 -- accel/accel.sh@21 -- # case "$var" in 00:10:04.468 08:44:21 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:04.468 08:44:21 -- accel/accel.sh@19 -- # IFS=: 00:10:04.468 
00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val=0
00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val='8192 bytes'
00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val=software
00:10:04.468 08:44:21 -- accel/accel.sh@22 -- # accel_module=software
00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val=32
00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val=32
00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val=1
00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val='1 seconds'
00:10:04.468 08:44:21 -- accel/accel.sh@20 -- # val=Yes
00:10:05.844 08:44:22 -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:05.844 08:44:22 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:10:05.844 08:44:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:05.844 real 0m1.364s
00:10:05.844 user 0m1.224s
00:10:05.844 sys 0m0.144s
00:10:05.844 ************************************
00:10:05.844 END TEST accel_copy_crc32c_C2
00:10:05.844 ************************************
00:10:05.844 08:44:22 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:10:05.844 ************************************
00:10:05.844 START TEST accel_dualcast
00:10:05.844 ************************************
00:10:05.844 08:44:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:10:05.844 [2024-04-26 08:44:23.046373] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:10:05.844 [2024-04-26 08:44:23.046461] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1927088 ]
00:10:05.844 EAL: No free 2048 kB hugepages reported on node 1
00:10:06.103 [2024-04-26 08:44:23.120842] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:06.103 [2024-04-26 08:44:23.194742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:06.103 08:44:23 -- accel/accel.sh@20 -- # val=0x1
00:10:06.103 08:44:23 -- accel/accel.sh@20 -- # val=dualcast
00:10:06.103 08:44:23 -- accel/accel.sh@23 -- # accel_opc=dualcast
00:10:06.103 08:44:23 -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:06.103 08:44:23 -- accel/accel.sh@20 -- # val=software
00:10:06.103 08:44:23 -- accel/accel.sh@22 -- # accel_module=software
00:10:06.103 08:44:23 -- accel/accel.sh@20 -- # val=32
00:10:06.103 08:44:23 -- accel/accel.sh@20 -- # val=32
00:10:06.103 08:44:23 -- accel/accel.sh@20 -- # val=1
00:10:06.103 08:44:23 -- accel/accel.sh@20 -- # val='1 seconds'
00:10:06.103 08:44:23 -- accel/accel.sh@20 -- # val=Yes
00:10:07.488 08:44:24 -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:07.488 08:44:24 -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:10:07.488 08:44:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:07.488 real 0m1.374s
00:10:07.488 user 0m1.246s
00:10:07.488 sys 0m0.133s
00:10:07.488 ************************************
00:10:07.488 END TEST accel_dualcast
00:10:07.488 ************************************
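
The val=/case/read runs traced above come from accel.sh splitting accel_perf's key:value status output on colons (IFS=: with read -r var val). A minimal, self-contained sketch of that parsing pattern follows; the key names and the canned input stream are assumptions standing in for whatever accel_perf actually prints, not accel.sh source:

    #!/usr/bin/env bash
    # Hypothetical reconstruction of the parse loop seen in the xtrace above.
    printf 'opc: dualcast\nmodule: software\n' |
    while IFS=: read -r var val; do
        val=${val# }                     # drop the space that follows the colon
        case "$var" in
            opc)    accel_opc=$val; echo "accel_opc=$accel_opc" ;;
            module) accel_module=$val; echo "accel_module=$accel_module" ;;
            *)      echo "val=$val" ;;   # mirrors the val=... lines in the trace
        esac
    done
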
00:10:07.488 08:44:24 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:10:07.488 ************************************
00:10:07.488 START TEST accel_compare
00:10:07.488 ************************************
00:10:07.488 08:44:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:10:07.489 [2024-04-26 08:44:24.630819] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:10:07.489 [2024-04-26 08:44:24.630891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1927391 ]
00:10:07.489 EAL: No free 2048 kB hugepages reported on node 1
00:10:07.489 [2024-04-26 08:44:24.704074] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:07.748 [2024-04-26 08:44:24.772000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:07.748 08:44:24 -- accel/accel.sh@20 -- # val=0x1
00:10:07.748 08:44:24 -- accel/accel.sh@20 -- # val=compare
00:10:07.748 08:44:24 -- accel/accel.sh@23 -- # accel_opc=compare
00:10:07.748 08:44:24 -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:07.748 08:44:24 -- accel/accel.sh@20 -- # val=software
00:10:07.748 08:44:24 -- accel/accel.sh@22 -- # accel_module=software
00:10:07.748 08:44:24 -- accel/accel.sh@20 -- # val=32
00:10:07.748 08:44:24 -- accel/accel.sh@20 -- # val=32
00:10:07.748 08:44:24 -- accel/accel.sh@20 -- # val=1
00:10:07.748 08:44:24 -- accel/accel.sh@20 -- # val='1 seconds'
00:10:07.748 08:44:24 -- accel/accel.sh@20 -- # val=Yes
00:10:09.125 08:44:25 -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:09.125 08:44:25 -- accel/accel.sh@27 -- # [[ -n compare ]]
00:10:09.126 08:44:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:09.126 real 0m1.364s
00:10:09.126 user 0m1.238s
00:10:09.126 sys 0m0.131s
00:10:09.126 ************************************
00:10:09.126 END TEST accel_compare
00:10:09.126 ************************************
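
Each invocation above hands accel_perf its accel configuration as JSON on file descriptor 62 (-c /dev/fd/62). The exact fd redirection accel.sh uses is not visible in this log; a minimal way to reproduce the pattern from a shell, letting bash pick the descriptor via process substitution, would be the following (the empty JSON config is an assumption; only the binary path and the -t/-w/-y flags are taken from this log):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # workspace path from this log
    # bash substitutes /dev/fd/<n> for the <(...) below, analogous to /dev/fd/62 above
    "$SPDK_DIR/build/examples/accel_perf" -c <(printf '{}') -t 1 -w compare -y
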
00:10:09.126 08:44:26 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:10:09.126 ************************************
00:10:09.126 START TEST accel_xor
00:10:09.126 ************************************
00:10:09.126 08:44:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:10:09.126 [2024-04-26 08:44:26.208709] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:10:09.126 [2024-04-26 08:44:26.208784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1927696 ]
00:10:09.126 EAL: No free 2048 kB hugepages reported on node 1
00:10:09.126 [2024-04-26 08:44:26.282598] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:09.126 [2024-04-26 08:44:26.354276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:09.385 08:44:26 -- accel/accel.sh@20 -- # val=0x1
00:10:09.385 08:44:26 -- accel/accel.sh@20 -- # val=xor
00:10:09.385 08:44:26 -- accel/accel.sh@23 -- # accel_opc=xor
00:10:09.385 08:44:26 -- accel/accel.sh@20 -- # val=2
00:10:09.385 08:44:26 -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:09.385 08:44:26 -- accel/accel.sh@20 -- # val=software
00:10:09.385 08:44:26 -- accel/accel.sh@22 -- # accel_module=software
00:10:09.385 08:44:26 -- accel/accel.sh@20 -- # val=32
00:10:09.385 08:44:26 -- accel/accel.sh@20 -- # val=32
00:10:09.385 08:44:26 -- accel/accel.sh@20 -- # val=1
00:10:09.385 08:44:26 -- accel/accel.sh@20 -- # val='1 seconds'
00:10:09.385 08:44:26 -- accel/accel.sh@20 -- # val=Yes
00:10:10.323 08:44:27 -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:10.323 08:44:27 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:10:10.323 08:44:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:10.323 real 0m1.369s
00:10:10.323 user 0m1.240s
00:10:10.323 sys 0m0.133s
00:10:10.323 ************************************
00:10:10.323 END TEST accel_xor
00:10:10.323 ************************************
00:10:10.582 08:44:27 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:10:10.582 ************************************
00:10:10.582 START TEST accel_xor
00:10:10.582 ************************************
00:10:10.582 08:44:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:10:10.582 [2024-04-26 08:44:27.764399] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:10:10.582 [2024-04-26 08:44:27.764548] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1927993 ]
00:10:10.582 EAL: No free 2048 kB hugepages reported on node 1
00:10:10.841 [2024-04-26 08:44:27.837363] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:10.841 [2024-04-26 08:44:27.908820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:10.841 08:44:27 -- accel/accel.sh@20 -- # val=0x1
00:10:10.841 08:44:27 -- accel/accel.sh@20 -- # val=xor
00:10:10.841 08:44:27 -- accel/accel.sh@23 -- # accel_opc=xor
00:10:10.841 08:44:27 -- accel/accel.sh@20 -- # val=3
00:10:10.841 08:44:27 -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:10.841 08:44:27 -- accel/accel.sh@20 -- # val=software
00:10:10.841 08:44:27 -- accel/accel.sh@22 -- # accel_module=software
00:10:10.841 08:44:27 -- accel/accel.sh@20 -- # val=32
00:10:10.841 08:44:27 -- accel/accel.sh@20 -- # val=32
00:10:10.841 08:44:27 -- accel/accel.sh@20 -- # val=1
00:10:10.841 08:44:27 -- accel/accel.sh@20 -- # val='1 seconds'
00:10:10.841 08:44:27 -- accel/accel.sh@20 -- # val=Yes
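
This second xor pass differs from the first only in the extra -x 3, and the parameter dump above echoes val=3 where the first pass echoed val=2, so -x evidently sets the number of xor source buffers, with two as the default. Side by side, with the config descriptor omitted for brevity:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y        # two sources (val=2 above)
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y -x 3   # three sources (val=3 above)
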
00:10:12.220 08:44:29 -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:12.220 08:44:29 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:10:12.220 08:44:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:12.220 real 0m1.366s
00:10:12.220 user 0m1.231s
00:10:12.220 sys 0m0.138s
00:10:12.220 ************************************
00:10:12.220 END TEST accel_xor
00:10:12.220 ************************************
00:10:12.220 08:44:29 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:10:12.220 ************************************
00:10:12.220 START TEST accel_dif_verify
00:10:12.220 ************************************
00:10:12.220 08:44:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:10:12.220 [2024-04-26 08:44:29.330767] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
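
Every accel_perf launch in this stretch logs the same "EAL: No free 2048 kB hugepages reported on node 1" notice, suggesting the 2 MB hugepage pool was reserved on node 0 only, which is harmless for these single-core (-c 0x1) runs. The per-node pools can be checked on the host with standard sysfs paths (generic Linux, not taken from this log):

    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
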
00:10:12.220 [2024-04-26 08:44:29.330832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1928293 ]
00:10:12.220 EAL: No free 2048 kB hugepages reported on node 1
00:10:12.478 [2024-04-26 08:44:29.401938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:12.478 [2024-04-26 08:44:29.473733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:12.478 08:44:29 -- accel/accel.sh@20 -- # val=0x1
00:10:12.478 08:44:29 -- accel/accel.sh@20 -- # val=dif_verify
00:10:12.478 08:44:29 -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:10:12.478 08:44:29 -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:12.478 08:44:29 -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:12.478 08:44:29 -- accel/accel.sh@20 -- # val='512 bytes'
00:10:12.478 08:44:29 -- accel/accel.sh@20 -- # val='8 bytes'
00:10:12.478 08:44:29 -- accel/accel.sh@20 -- # val=software
00:10:12.478 08:44:29 -- accel/accel.sh@22 -- # accel_module=software
00:10:12.479 08:44:29 -- accel/accel.sh@20 -- # val=32
00:10:12.479 08:44:29 -- accel/accel.sh@20 -- # val=32
00:10:12.479 08:44:29 -- accel/accel.sh@20 -- # val=1
00:10:12.479 08:44:29 -- accel/accel.sh@20 -- # val='1 seconds'
00:10:12.479 08:44:29 -- accel/accel.sh@20 -- # val=No
00:10:13.856 08:44:30 -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:13.856 08:44:30 -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:10:13.856 08:44:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:13.856 real 0m1.366s
00:10:13.856 user 0m1.244s
00:10:13.856 sys 0m0.127s
00:10:13.856 ************************************
00:10:13.856 END TEST accel_dif_verify
00:10:13.856 ************************************
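
The dif_verify dump adds two sizes not seen in the earlier workloads: '512 bytes' and '8 bytes' alongside the two 4096-byte buffer values. Eight bytes matches the size of a T10 DIF protection-information tuple; how the 512-byte value maps is not labelled in the trace, so read that as an inference. Note also that the dif cases run without -y and echo val=No where the -y tests echo val=Yes. As exercised above:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_verify   # checks generated DIF tuples; no -y flag
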
00:10:13.856 08:44:30 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:10:13.856 ************************************
00:10:13.856 START TEST accel_dif_generate
00:10:13.856 ************************************
00:10:13.857 08:44:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:10:13.857 [2024-04-26 08:44:30.872403] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:10:13.857 [2024-04-26 08:44:30.872492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1928592 ]
00:10:13.857 EAL: No free 2048 kB hugepages reported on node 1
00:10:13.857 [2024-04-26 08:44:30.946290] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:13.857 [2024-04-26 08:44:31.016657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:13.857 08:44:31 -- accel/accel.sh@20 -- # val=0x1
00:10:13.857 08:44:31 -- accel/accel.sh@20 -- # val=dif_generate
00:10:13.857 08:44:31 -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:10:13.857 08:44:31 -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:13.857 08:44:31 -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:13.857 08:44:31 -- accel/accel.sh@20 -- # val='512 bytes'
00:10:13.857 08:44:31 -- accel/accel.sh@20 -- # val='8 bytes'
00:10:13.857 08:44:31 -- accel/accel.sh@20 -- # val=software
00:10:13.857 08:44:31 -- accel/accel.sh@22 -- # accel_module=software
00:10:13.857 08:44:31 -- accel/accel.sh@20 -- # val=32
00:10:13.857 08:44:31 -- accel/accel.sh@20 -- # val=32
00:10:13.857 08:44:31 -- accel/accel.sh@20 -- # val=1
00:10:13.857 08:44:31 -- accel/accel.sh@20 -- # val='1 seconds'
00:10:13.857 08:44:31 -- accel/accel.sh@20 -- # val=No
00:10:15.233 08:44:32 -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:15.233 08:44:32 -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:10:15.233 08:44:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:15.233 real 0m1.365s
00:10:15.233 user 0m1.238s
00:10:15.233 sys 0m0.132s
00:10:15.233 ************************************
00:10:15.233 END TEST accel_dif_generate
00:10:15.233 ************************************
00:10:15.233 08:44:32 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:10:15.233 ************************************
00:10:15.233 START TEST accel_dif_generate_copy
00:10:15.233 ************************************
00:10:15.233 08:44:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:10:15.233 [2024-04-26 08:44:32.425409] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
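
Judging by the workload names alone (the log does not spell this out), dif_generate computes protection information over a buffer in place, while the dif_generate_copy pass just launched pairs that generation with a copy into a separate output buffer. Both are driven the same way, config descriptor omitted:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate_copy
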
00:10:15.233 [2024-04-26 08:44:32.425471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1928882 ]
00:10:15.233 EAL: No free 2048 kB hugepages reported on node 1
00:10:15.493 [2024-04-26 08:44:32.495817] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:15.493 [2024-04-26 08:44:32.562710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:15.493 08:44:32 -- accel/accel.sh@20 -- # val=0x1
00:10:15.493 08:44:32 -- accel/accel.sh@20 -- # val=dif_generate_copy
00:10:15.493 08:44:32 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:10:15.493 08:44:32 -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:15.493 08:44:32 -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:15.493 08:44:32 -- accel/accel.sh@20 -- # val=software
00:10:15.493 08:44:32 -- accel/accel.sh@22 -- # accel_module=software
00:10:15.493 08:44:32 -- accel/accel.sh@20 -- # val=32
00:10:15.493 08:44:32 -- accel/accel.sh@20 -- # val=32
00:10:15.493 08:44:32 -- accel/accel.sh@20 -- # val=1
00:10:15.493 08:44:32 -- accel/accel.sh@20 -- # val='1 seconds'
00:10:15.493 08:44:32 -- accel/accel.sh@20 -- # val=No
00:10:16.868 08:44:33 -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:16.868 08:44:33 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:10:16.868 08:44:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:16.868 real 0m1.357s
00:10:16.868 user 0m1.231s
00:10:16.868 sys 0m0.130s
00:10:16.868 ************************************
00:10:16.868 END TEST accel_dif_generate_copy
00:10:16.868 ************************************
00:10:16.868 08:44:33 -- accel/accel.sh@115 -- # [[ y == y ]]
00:10:16.868 08:44:33 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
common/autotest_common.sh@1093 -- # xtrace_disable 00:10:16.868 08:44:33 -- common/autotest_common.sh@10 -- # set +x 00:10:16.868 ************************************ 00:10:16.868 START TEST accel_comp 00:10:16.868 ************************************ 00:10:16.868 08:44:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:16.868 08:44:33 -- accel/accel.sh@16 -- # local accel_opc 00:10:16.868 08:44:33 -- accel/accel.sh@17 -- # local accel_module 00:10:16.868 08:44:33 -- accel/accel.sh@19 -- # IFS=: 00:10:16.868 08:44:33 -- accel/accel.sh@19 -- # read -r var val 00:10:16.868 08:44:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:16.868 08:44:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:16.868 08:44:33 -- accel/accel.sh@12 -- # build_accel_config 00:10:16.868 08:44:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:16.868 08:44:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:16.868 08:44:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.868 08:44:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.868 08:44:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:16.868 08:44:33 -- accel/accel.sh@40 -- # local IFS=, 00:10:16.868 08:44:33 -- accel/accel.sh@41 -- # jq -r . 00:10:16.868 [2024-04-26 08:44:33.952253] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:10:16.868 [2024-04-26 08:44:33.952307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1929171 ] 00:10:16.868 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.868 [2024-04-26 08:44:34.021664] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.868 [2024-04-26 08:44:34.088883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val= 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val= 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val= 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val=0x1 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val= 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val= 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 
-- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val=compress 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@23 -- # accel_opc=compress 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val= 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val=software 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@22 -- # accel_module=software 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val=32 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val=32 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val=1 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val=No 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val= 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:17.127 08:44:34 -- accel/accel.sh@20 -- # val= 00:10:17.127 08:44:34 -- accel/accel.sh@21 -- # case "$var" in 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # IFS=: 00:10:17.127 08:44:34 -- accel/accel.sh@19 -- # read -r var val 00:10:18.063 08:44:35 -- accel/accel.sh@20 -- # val= 00:10:18.063 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.063 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.063 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.063 08:44:35 -- accel/accel.sh@20 -- # val= 00:10:18.063 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.063 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.063 08:44:35 -- accel/accel.sh@19 -- # read 
-r var val 00:10:18.063 08:44:35 -- accel/accel.sh@20 -- # val= 00:10:18.063 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.063 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.063 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.063 08:44:35 -- accel/accel.sh@20 -- # val= 00:10:18.063 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.063 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.063 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.063 08:44:35 -- accel/accel.sh@20 -- # val= 00:10:18.063 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.063 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.063 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.063 08:44:35 -- accel/accel.sh@20 -- # val= 00:10:18.063 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.063 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.063 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.063 08:44:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:18.063 08:44:35 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:18.063 08:44:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:18.063 00:10:18.063 real 0m1.360s 00:10:18.063 user 0m1.241s 00:10:18.063 sys 0m0.121s 00:10:18.063 08:44:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:18.063 08:44:35 -- common/autotest_common.sh@10 -- # set +x 00:10:18.063 ************************************ 00:10:18.063 END TEST accel_comp 00:10:18.063 ************************************ 00:10:18.322 08:44:35 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:18.322 08:44:35 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:18.322 08:44:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.322 08:44:35 -- common/autotest_common.sh@10 -- # set +x 00:10:18.322 ************************************ 00:10:18.322 START TEST accel_decomp 00:10:18.322 ************************************ 00:10:18.322 08:44:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:18.322 08:44:35 -- accel/accel.sh@16 -- # local accel_opc 00:10:18.322 08:44:35 -- accel/accel.sh@17 -- # local accel_module 00:10:18.322 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.322 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.322 08:44:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:18.322 08:44:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:10:18.322 08:44:35 -- accel/accel.sh@12 -- # build_accel_config 00:10:18.322 08:44:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:18.322 08:44:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:18.322 08:44:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.322 08:44:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.322 08:44:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:18.322 08:44:35 -- accel/accel.sh@40 -- # local IFS=, 00:10:18.322 08:44:35 -- accel/accel.sh@41 -- # jq -r . 00:10:18.322 [2024-04-26 08:44:35.499929] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
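Note: the accel_comp case above and the accel_decomp case starting here both target the bundled corpus at spdk/test/accel/bib; as used in these command lines, '-l' names the input file and '-y' requests verification of the output (meanings inferred from the traced invocations, not restated from accel_perf's help). Standalone sketches, under the same assumptions as the dif_generate_copy note above:

    # compress the test corpus, then decompress it with verification
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y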
00:10:18.322 [2024-04-26 08:44:35.499985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1929453 ] 00:10:18.322 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.581 [2024-04-26 08:44:35.570236] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.581 [2024-04-26 08:44:35.638414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val= 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val= 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val= 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val=0x1 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val= 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val= 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val=decompress 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val= 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val=software 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@22 -- # accel_module=software 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val=32 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 
-- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val=32 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val=1 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val=Yes 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val= 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:18.581 08:44:35 -- accel/accel.sh@20 -- # val= 00:10:18.581 08:44:35 -- accel/accel.sh@21 -- # case "$var" in 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # IFS=: 00:10:18.581 08:44:35 -- accel/accel.sh@19 -- # read -r var val 00:10:19.958 08:44:36 -- accel/accel.sh@20 -- # val= 00:10:19.958 08:44:36 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.958 08:44:36 -- accel/accel.sh@19 -- # IFS=: 00:10:19.958 08:44:36 -- accel/accel.sh@19 -- # read -r var val 00:10:19.958 08:44:36 -- accel/accel.sh@20 -- # val= 00:10:19.958 08:44:36 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.958 08:44:36 -- accel/accel.sh@19 -- # IFS=: 00:10:19.958 08:44:36 -- accel/accel.sh@19 -- # read -r var val 00:10:19.958 08:44:36 -- accel/accel.sh@20 -- # val= 00:10:19.958 08:44:36 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.958 08:44:36 -- accel/accel.sh@19 -- # IFS=: 00:10:19.958 08:44:36 -- accel/accel.sh@19 -- # read -r var val 00:10:19.958 08:44:36 -- accel/accel.sh@20 -- # val= 00:10:19.958 08:44:36 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.958 08:44:36 -- accel/accel.sh@19 -- # IFS=: 00:10:19.958 08:44:36 -- accel/accel.sh@19 -- # read -r var val 00:10:19.958 08:44:36 -- accel/accel.sh@20 -- # val= 00:10:19.958 08:44:36 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.958 08:44:36 -- accel/accel.sh@19 -- # IFS=: 00:10:19.958 08:44:36 -- accel/accel.sh@19 -- # read -r var val 00:10:19.958 08:44:36 -- accel/accel.sh@20 -- # val= 00:10:19.958 08:44:36 -- accel/accel.sh@21 -- # case "$var" in 00:10:19.958 08:44:36 -- accel/accel.sh@19 -- # IFS=: 00:10:19.958 08:44:36 -- accel/accel.sh@19 -- # read -r var val 00:10:19.958 08:44:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:19.958 08:44:36 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:19.958 08:44:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:19.958 00:10:19.958 real 0m1.361s 00:10:19.958 user 0m1.240s 00:10:19.958 sys 0m0.123s 00:10:19.958 08:44:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:19.958 08:44:36 -- common/autotest_common.sh@10 -- # set +x 00:10:19.958 ************************************ 00:10:19.958 END TEST accel_decomp 00:10:19.958 ************************************ 00:10:19.958 08:44:36 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:19.958 08:44:36 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:19.958 08:44:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:19.958 08:44:36 -- common/autotest_common.sh@10 -- # set +x 00:10:19.958 ************************************ 00:10:19.958 START TEST accel_decmop_full 00:10:19.958 ************************************ 00:10:19.958 08:44:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:19.958 08:44:37 -- accel/accel.sh@16 -- # local accel_opc 00:10:19.958 08:44:37 -- accel/accel.sh@17 -- # local accel_module 00:10:19.958 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:19.958 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:19.958 08:44:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:19.958 08:44:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:19.958 08:44:37 -- accel/accel.sh@12 -- # build_accel_config 00:10:19.958 08:44:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:19.958 08:44:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:19.958 08:44:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:19.958 08:44:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:19.958 08:44:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:19.958 08:44:37 -- accel/accel.sh@40 -- # local IFS=, 00:10:19.958 08:44:37 -- accel/accel.sh@41 -- # jq -r . 00:10:19.958 [2024-04-26 08:44:37.046283] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
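Note: this case is registered under the misspelled name accel_decmop_full, exactly as the banners print it; functionally it is the accel_decomp run with '-o 0' added. Judging by the '111250 bytes' buffer size echoed in the trace below, versus the '4096 bytes' of the plain case, '-o 0' sizes each transfer to the whole input file. Sketch, same assumptions as above:

    # full-file decompress (one 111250-byte transfer per op instead of 4 KiB blocks)
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0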
00:10:19.958 [2024-04-26 08:44:37.046337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1929743 ] 00:10:19.958 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.958 [2024-04-26 08:44:37.115510] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.958 [2024-04-26 08:44:37.182804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val= 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val= 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val= 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val=0x1 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val= 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val= 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val=decompress 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val= 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val=software 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@22 -- # accel_module=software 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val=32 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 
08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val=32 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val=1 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val=Yes 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val= 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:20.243 08:44:37 -- accel/accel.sh@20 -- # val= 00:10:20.243 08:44:37 -- accel/accel.sh@21 -- # case "$var" in 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # IFS=: 00:10:20.243 08:44:37 -- accel/accel.sh@19 -- # read -r var val 00:10:21.179 08:44:38 -- accel/accel.sh@20 -- # val= 00:10:21.179 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.179 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.179 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.179 08:44:38 -- accel/accel.sh@20 -- # val= 00:10:21.179 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.179 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.179 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.179 08:44:38 -- accel/accel.sh@20 -- # val= 00:10:21.179 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.179 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.179 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.179 08:44:38 -- accel/accel.sh@20 -- # val= 00:10:21.179 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.179 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.179 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.179 08:44:38 -- accel/accel.sh@20 -- # val= 00:10:21.179 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.179 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.179 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.179 08:44:38 -- accel/accel.sh@20 -- # val= 00:10:21.179 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.179 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.179 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.179 08:44:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:21.179 08:44:38 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:21.179 08:44:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:21.179 00:10:21.179 real 0m1.366s 00:10:21.179 user 0m1.236s 00:10:21.179 sys 0m0.131s 00:10:21.179 08:44:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:21.179 08:44:38 -- common/autotest_common.sh@10 -- # set +x 00:10:21.179 ************************************ 00:10:21.179 END TEST accel_decmop_full 00:10:21.179 ************************************ 00:10:21.179 08:44:38 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:21.179 08:44:38 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:21.179 08:44:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:21.179 08:44:38 -- common/autotest_common.sh@10 -- # set +x 00:10:21.439 ************************************ 00:10:21.439 START TEST accel_decomp_mcore 00:10:21.439 ************************************ 00:10:21.439 08:44:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:21.439 08:44:38 -- accel/accel.sh@16 -- # local accel_opc 00:10:21.439 08:44:38 -- accel/accel.sh@17 -- # local accel_module 00:10:21.439 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.439 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.440 08:44:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:21.440 08:44:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:21.440 08:44:38 -- accel/accel.sh@12 -- # build_accel_config 00:10:21.440 08:44:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:21.440 08:44:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:21.440 08:44:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.440 08:44:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.440 08:44:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:21.440 08:44:38 -- accel/accel.sh@40 -- # local IFS=, 00:10:21.440 08:44:38 -- accel/accel.sh@41 -- # jq -r . 00:10:21.440 [2024-04-26 08:44:38.602629] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
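Note: accel_decomp_mcore repeats the decompress run with the SPDK core-mask option '-m 0xf'; the EAL parameters below accordingly carry '-c 0xf', four cores are reported available, and reactors start on cores 0-3. Sketch, same assumptions as above:

    # same decompress workload spread across four cores
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf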
00:10:21.440 [2024-04-26 08:44:38.602683] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930035 ] 00:10:21.440 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.440 [2024-04-26 08:44:38.672087] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.698 [2024-04-26 08:44:38.744783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.698 [2024-04-26 08:44:38.744877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.698 [2024-04-26 08:44:38.744978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.698 [2024-04-26 08:44:38.744980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.698 08:44:38 -- accel/accel.sh@20 -- # val= 00:10:21.698 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.698 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.698 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.698 08:44:38 -- accel/accel.sh@20 -- # val= 00:10:21.698 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.698 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.698 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.698 08:44:38 -- accel/accel.sh@20 -- # val= 00:10:21.698 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.698 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.698 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.698 08:44:38 -- accel/accel.sh@20 -- # val=0xf 00:10:21.698 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.698 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.698 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.698 08:44:38 -- accel/accel.sh@20 -- # val= 00:10:21.698 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.698 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.699 08:44:38 -- accel/accel.sh@20 -- # val= 00:10:21.699 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.699 08:44:38 -- accel/accel.sh@20 -- # val=decompress 00:10:21.699 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.699 08:44:38 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.699 08:44:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:21.699 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.699 08:44:38 -- accel/accel.sh@20 -- # val= 00:10:21.699 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.699 08:44:38 -- accel/accel.sh@20 -- # val=software 00:10:21.699 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.699 08:44:38 -- accel/accel.sh@22 -- # accel_module=software 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.699 08:44:38 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:21.699 08:44:38 -- accel/accel.sh@21 -- # case 
"$var" in 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.699 08:44:38 -- accel/accel.sh@20 -- # val=32 00:10:21.699 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.699 08:44:38 -- accel/accel.sh@20 -- # val=32 00:10:21.699 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.699 08:44:38 -- accel/accel.sh@20 -- # val=1 00:10:21.699 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.699 08:44:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:21.699 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.699 08:44:38 -- accel/accel.sh@20 -- # val=Yes 00:10:21.699 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.699 08:44:38 -- accel/accel.sh@20 -- # val= 00:10:21.699 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:21.699 08:44:38 -- accel/accel.sh@20 -- # val= 00:10:21.699 08:44:38 -- accel/accel.sh@21 -- # case "$var" in 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # IFS=: 00:10:21.699 08:44:38 -- accel/accel.sh@19 -- # read -r var val 00:10:23.075 08:44:39 -- accel/accel.sh@20 -- # val= 00:10:23.075 08:44:39 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.075 08:44:39 -- accel/accel.sh@19 -- # IFS=: 00:10:23.075 08:44:39 -- accel/accel.sh@19 -- # read -r var val 00:10:23.075 08:44:39 -- accel/accel.sh@20 -- # val= 00:10:23.075 08:44:39 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.075 08:44:39 -- accel/accel.sh@19 -- # IFS=: 00:10:23.075 08:44:39 -- accel/accel.sh@19 -- # read -r var val 00:10:23.075 08:44:39 -- accel/accel.sh@20 -- # val= 00:10:23.075 08:44:39 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.075 08:44:39 -- accel/accel.sh@19 -- # IFS=: 00:10:23.075 08:44:39 -- accel/accel.sh@19 -- # read -r var val 00:10:23.075 08:44:39 -- accel/accel.sh@20 -- # val= 00:10:23.075 08:44:39 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.075 08:44:39 -- accel/accel.sh@19 -- # IFS=: 00:10:23.075 08:44:39 -- accel/accel.sh@19 -- # read -r var val 00:10:23.075 08:44:39 -- accel/accel.sh@20 -- # val= 00:10:23.076 08:44:39 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.076 08:44:39 -- accel/accel.sh@19 -- # IFS=: 00:10:23.076 08:44:39 -- accel/accel.sh@19 -- # read -r var val 00:10:23.076 08:44:39 -- accel/accel.sh@20 -- # val= 00:10:23.076 08:44:39 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.076 08:44:39 -- accel/accel.sh@19 -- # IFS=: 00:10:23.076 08:44:39 -- accel/accel.sh@19 -- # read -r var val 00:10:23.076 08:44:39 -- accel/accel.sh@20 -- # val= 00:10:23.076 08:44:39 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.076 08:44:39 -- accel/accel.sh@19 -- # IFS=: 00:10:23.076 08:44:39 -- accel/accel.sh@19 -- # read -r var val 00:10:23.076 08:44:39 -- accel/accel.sh@20 -- # val= 00:10:23.076 08:44:39 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.076 
08:44:39 -- accel/accel.sh@19 -- # IFS=: 00:10:23.076 08:44:39 -- accel/accel.sh@19 -- # read -r var val 00:10:23.076 08:44:39 -- accel/accel.sh@20 -- # val= 00:10:23.076 08:44:39 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.076 08:44:39 -- accel/accel.sh@19 -- # IFS=: 00:10:23.076 08:44:39 -- accel/accel.sh@19 -- # read -r var val 00:10:23.076 08:44:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:23.076 08:44:39 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:23.076 08:44:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:23.076 00:10:23.076 real 0m1.379s 00:10:23.076 user 0m4.586s 00:10:23.076 sys 0m0.139s 00:10:23.076 08:44:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:23.076 08:44:39 -- common/autotest_common.sh@10 -- # set +x 00:10:23.076 ************************************ 00:10:23.076 END TEST accel_decomp_mcore 00:10:23.076 ************************************ 00:10:23.076 08:44:39 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:23.076 08:44:39 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:23.076 08:44:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:23.076 08:44:39 -- common/autotest_common.sh@10 -- # set +x 00:10:23.076 ************************************ 00:10:23.076 START TEST accel_decomp_full_mcore 00:10:23.076 ************************************ 00:10:23.076 08:44:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:23.076 08:44:40 -- accel/accel.sh@16 -- # local accel_opc 00:10:23.076 08:44:40 -- accel/accel.sh@17 -- # local accel_module 00:10:23.076 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.076 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.076 08:44:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:23.076 08:44:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:23.076 08:44:40 -- accel/accel.sh@12 -- # build_accel_config 00:10:23.076 08:44:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:23.076 08:44:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:23.076 08:44:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.076 08:44:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.076 08:44:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:23.076 08:44:40 -- accel/accel.sh@40 -- # local IFS=, 00:10:23.076 08:44:40 -- accel/accel.sh@41 -- # jq -r . 00:10:23.076 [2024-04-26 08:44:40.189874] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
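Note: accel_decomp_full_mcore combines the two previous variations: full-file transfers ('-o 0', hence the '111250 bytes' buffers in the trace below) across the four-core mask ('-m 0xf'). Sketch, same assumptions as above:

    # full-file decompress on four cores
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf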
00:10:23.076 [2024-04-26 08:44:40.189930] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930345 ] 00:10:23.076 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.076 [2024-04-26 08:44:40.261347] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.335 [2024-04-26 08:44:40.333765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.335 [2024-04-26 08:44:40.333858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.335 [2024-04-26 08:44:40.333951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.335 [2024-04-26 08:44:40.333954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val= 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val= 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val= 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val=0xf 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val= 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val= 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val=decompress 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val= 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val=software 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@22 -- # accel_module=software 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case 
"$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val=32 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val=32 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val=1 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val=Yes 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val= 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:23.335 08:44:40 -- accel/accel.sh@20 -- # val= 00:10:23.335 08:44:40 -- accel/accel.sh@21 -- # case "$var" in 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # IFS=: 00:10:23.335 08:44:40 -- accel/accel.sh@19 -- # read -r var val 00:10:24.711 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.711 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.711 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.711 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.711 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.711 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.711 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.711 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.711 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.711 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.711 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.711 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.711 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.711 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.711 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.711 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.711 
08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.711 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.711 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.711 08:44:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:24.711 08:44:41 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:24.711 08:44:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:24.711 00:10:24.711 real 0m1.393s 00:10:24.711 user 0m4.623s 00:10:24.711 sys 0m0.144s 00:10:24.711 08:44:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:24.711 08:44:41 -- common/autotest_common.sh@10 -- # set +x 00:10:24.711 ************************************ 00:10:24.711 END TEST accel_decomp_full_mcore 00:10:24.711 ************************************ 00:10:24.711 08:44:41 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:24.711 08:44:41 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:24.711 08:44:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:24.711 08:44:41 -- common/autotest_common.sh@10 -- # set +x 00:10:24.711 ************************************ 00:10:24.711 START TEST accel_decomp_mthread 00:10:24.711 ************************************ 00:10:24.711 08:44:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:24.711 08:44:41 -- accel/accel.sh@16 -- # local accel_opc 00:10:24.711 08:44:41 -- accel/accel.sh@17 -- # local accel_module 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.711 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.711 08:44:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:24.711 08:44:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:24.711 08:44:41 -- accel/accel.sh@12 -- # build_accel_config 00:10:24.711 08:44:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:24.711 08:44:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:24.711 08:44:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:24.711 08:44:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:24.711 08:44:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:24.711 08:44:41 -- accel/accel.sh@40 -- # local IFS=, 00:10:24.711 08:44:41 -- accel/accel.sh@41 -- # jq -r . 00:10:24.711 [2024-04-26 08:44:41.788030] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
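Note: accel_decomp_mthread adds '-T 2' to the decompress run, matching the 'val=2' echoed in the trace below; if '-T' carries its usual accel_perf meaning of worker threads per core, this runs two channels on the single core (the EAL line below still shows '-c 0x1') — an assumption, since the log itself only shows the flag. Sketch, same assumptions as above:

    # decompress with two worker threads on one core
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2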
00:10:24.711 [2024-04-26 08:44:41.788089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930668 ] 00:10:24.711 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.711 [2024-04-26 08:44:41.859590] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.711 [2024-04-26 08:44:41.927222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.969 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.969 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.969 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.969 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.969 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.969 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.969 08:44:41 -- accel/accel.sh@20 -- # val=0x1 00:10:24.969 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.969 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.969 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.969 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.969 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.969 08:44:41 -- accel/accel.sh@20 -- # val=decompress 00:10:24.969 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.969 08:44:41 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.969 08:44:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:24.969 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.969 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.969 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.970 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.970 08:44:41 -- accel/accel.sh@20 -- # val=software 00:10:24.970 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.970 08:44:41 -- accel/accel.sh@22 -- # accel_module=software 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.970 08:44:41 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:24.970 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.970 08:44:41 -- accel/accel.sh@20 -- # val=32 00:10:24.970 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.970 08:44:41 
-- accel/accel.sh@19 -- # read -r var val 00:10:24.970 08:44:41 -- accel/accel.sh@20 -- # val=32 00:10:24.970 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.970 08:44:41 -- accel/accel.sh@20 -- # val=2 00:10:24.970 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.970 08:44:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:24.970 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.970 08:44:41 -- accel/accel.sh@20 -- # val=Yes 00:10:24.970 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.970 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.970 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:24.970 08:44:41 -- accel/accel.sh@20 -- # val= 00:10:24.970 08:44:41 -- accel/accel.sh@21 -- # case "$var" in 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # IFS=: 00:10:24.970 08:44:41 -- accel/accel.sh@19 -- # read -r var val 00:10:25.906 08:44:43 -- accel/accel.sh@20 -- # val= 00:10:25.906 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.906 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:25.906 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:25.906 08:44:43 -- accel/accel.sh@20 -- # val= 00:10:25.906 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.906 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:25.906 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:25.906 08:44:43 -- accel/accel.sh@20 -- # val= 00:10:25.906 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.906 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:25.906 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:25.906 08:44:43 -- accel/accel.sh@20 -- # val= 00:10:25.906 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.906 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:25.906 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:25.906 08:44:43 -- accel/accel.sh@20 -- # val= 00:10:25.906 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.906 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:25.906 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:25.906 08:44:43 -- accel/accel.sh@20 -- # val= 00:10:25.906 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.906 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:25.906 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:25.906 08:44:43 -- accel/accel.sh@20 -- # val= 00:10:25.906 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:25.906 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:25.906 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:25.906 08:44:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:25.906 08:44:43 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:25.906 08:44:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:25.906 00:10:25.906 real 0m1.373s 00:10:25.906 user 0m1.245s 00:10:25.906 sys 0m0.143s 00:10:25.906 08:44:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:25.906 08:44:43 -- common/autotest_common.sh@10 -- # set +x 
00:10:25.906 ************************************ 00:10:25.906 END TEST accel_decomp_mthread 00:10:25.906 ************************************ 00:10:26.165 08:44:43 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:26.165 08:44:43 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:26.165 08:44:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:26.165 08:44:43 -- common/autotest_common.sh@10 -- # set +x 00:10:26.165 ************************************ 00:10:26.165 START TEST accel_deomp_full_mthread 00:10:26.165 ************************************ 00:10:26.165 08:44:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:26.165 08:44:43 -- accel/accel.sh@16 -- # local accel_opc 00:10:26.165 08:44:43 -- accel/accel.sh@17 -- # local accel_module 00:10:26.165 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.165 08:44:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:26.165 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.165 08:44:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:26.165 08:44:43 -- accel/accel.sh@12 -- # build_accel_config 00:10:26.165 08:44:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:26.165 08:44:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:26.165 08:44:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.165 08:44:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.165 08:44:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:26.165 08:44:43 -- accel/accel.sh@40 -- # local IFS=, 00:10:26.165 08:44:43 -- accel/accel.sh@41 -- # jq -r . 00:10:26.165 [2024-04-26 08:44:43.368972] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:10:26.165 [2024-04-26 08:44:43.369031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930984 ] 00:10:26.165 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.424 [2024-04-26 08:44:43.441491] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.424 [2024-04-26 08:44:43.510790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val= 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val= 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val= 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val=0x1 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val= 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val= 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val=decompress 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val= 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val=software 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@22 -- # accel_module=software 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val=32 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 
08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val=32 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val=2 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val=Yes 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val= 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:26.424 08:44:43 -- accel/accel.sh@20 -- # val= 00:10:26.424 08:44:43 -- accel/accel.sh@21 -- # case "$var" in 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # IFS=: 00:10:26.424 08:44:43 -- accel/accel.sh@19 -- # read -r var val 00:10:27.813 08:44:44 -- accel/accel.sh@20 -- # val= 00:10:27.813 08:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.813 08:44:44 -- accel/accel.sh@19 -- # IFS=: 00:10:27.813 08:44:44 -- accel/accel.sh@19 -- # read -r var val 00:10:27.813 08:44:44 -- accel/accel.sh@20 -- # val= 00:10:27.813 08:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.813 08:44:44 -- accel/accel.sh@19 -- # IFS=: 00:10:27.813 08:44:44 -- accel/accel.sh@19 -- # read -r var val 00:10:27.813 08:44:44 -- accel/accel.sh@20 -- # val= 00:10:27.813 08:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.813 08:44:44 -- accel/accel.sh@19 -- # IFS=: 00:10:27.813 08:44:44 -- accel/accel.sh@19 -- # read -r var val 00:10:27.813 08:44:44 -- accel/accel.sh@20 -- # val= 00:10:27.813 08:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.813 08:44:44 -- accel/accel.sh@19 -- # IFS=: 00:10:27.813 08:44:44 -- accel/accel.sh@19 -- # read -r var val 00:10:27.813 08:44:44 -- accel/accel.sh@20 -- # val= 00:10:27.813 08:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.813 08:44:44 -- accel/accel.sh@19 -- # IFS=: 00:10:27.813 08:44:44 -- accel/accel.sh@19 -- # read -r var val 00:10:27.813 08:44:44 -- accel/accel.sh@20 -- # val= 00:10:27.813 08:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.813 08:44:44 -- accel/accel.sh@19 -- # IFS=: 00:10:27.813 08:44:44 -- accel/accel.sh@19 -- # read -r var val 00:10:27.813 08:44:44 -- accel/accel.sh@20 -- # val= 00:10:27.813 08:44:44 -- accel/accel.sh@21 -- # case "$var" in 00:10:27.813 08:44:44 -- accel/accel.sh@19 -- # IFS=: 00:10:27.813 08:44:44 -- accel/accel.sh@19 -- # read -r var val 00:10:27.813 08:44:44 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:27.813 08:44:44 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:27.813 08:44:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:27.813 00:10:27.813 real 0m1.392s 00:10:27.813 user 0m1.275s 00:10:27.813 sys 0m0.130s 00:10:27.813 08:44:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:27.813 08:44:44 -- common/autotest_common.sh@10 -- # 
set +x 00:10:27.813 ************************************ 00:10:27.813 END TEST accel_deomp_full_mthread 00:10:27.813 ************************************ 00:10:27.813 08:44:44 -- accel/accel.sh@124 -- # [[ n == y ]] 00:10:27.813 08:44:44 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:27.814 08:44:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:27.814 08:44:44 -- accel/accel.sh@137 -- # build_accel_config 00:10:27.814 08:44:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:27.814 08:44:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:27.814 08:44:44 -- common/autotest_common.sh@10 -- # set +x 00:10:27.814 08:44:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:27.814 08:44:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.814 08:44:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.814 08:44:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:27.814 08:44:44 -- accel/accel.sh@40 -- # local IFS=, 00:10:27.814 08:44:44 -- accel/accel.sh@41 -- # jq -r . 00:10:27.814 ************************************ 00:10:27.814 START TEST accel_dif_functional_tests 00:10:27.814 ************************************ 00:10:27.814 08:44:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:27.814 [2024-04-26 08:44:44.983557] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:10:27.814 [2024-04-26 08:44:44.983608] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1931293 ] 00:10:27.814 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.814 [2024-04-26 08:44:45.052640] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:28.072 [2024-04-26 08:44:45.125307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.072 [2024-04-26 08:44:45.125404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.072 [2024-04-26 08:44:45.125405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.072 00:10:28.072 00:10:28.072 CUnit - A unit testing framework for C - Version 2.1-3 00:10:28.072 http://cunit.sourceforge.net/ 00:10:28.072 00:10:28.072 00:10:28.072 Suite: accel_dif 00:10:28.072 Test: verify: DIF generated, GUARD check ...passed 00:10:28.072 Test: verify: DIF generated, APPTAG check ...passed 00:10:28.072 Test: verify: DIF generated, REFTAG check ...passed 00:10:28.072 Test: verify: DIF not generated, GUARD check ...[2024-04-26 08:44:45.193580] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:28.072 [2024-04-26 08:44:45.193627] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:28.072 passed 00:10:28.072 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 08:44:45.193674] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:28.072 [2024-04-26 08:44:45.193691] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:28.072 passed 00:10:28.072 Test: verify: DIF not generated, REFTAG check ...[2024-04-26 08:44:45.193713] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:28.072 [2024-04-26 
08:44:45.193731] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:28.072 passed 00:10:28.072 Test: verify: APPTAG correct, APPTAG check ...passed 00:10:28.072 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-26 08:44:45.193775] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:10:28.072 passed 00:10:28.072 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:10:28.072 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:28.072 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:10:28.072 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-26 08:44:45.193881] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:28.072 passed 00:10:28.072 Test: generate copy: DIF generated, GUARD check ...passed 00:10:28.072 Test: generate copy: DIF generated, APTTAG check ...passed 00:10:28.072 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:28.072 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:10:28.072 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:10:28.072 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:10:28.072 Test: generate copy: iovecs-len validate ...[2024-04-26 08:44:45.194062] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:10:28.072 passed 00:10:28.072 Test: generate copy: buffer alignment validate ...passed 00:10:28.072 00:10:28.072 Run Summary: Type Total Ran Passed Failed Inactive 00:10:28.073 suites 1 1 n/a 0 0 00:10:28.073 tests 20 20 20 0 0 00:10:28.073 asserts 204 204 204 0 n/a 00:10:28.073 00:10:28.073 Elapsed time = 0.002 seconds 00:10:28.332 00:10:28.332 real 0m0.442s 00:10:28.332 user 0m0.598s 00:10:28.332 sys 0m0.157s 00:10:28.332 08:44:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:28.332 08:44:45 -- common/autotest_common.sh@10 -- # set +x 00:10:28.332 ************************************ 00:10:28.332 END TEST accel_dif_functional_tests 00:10:28.332 ************************************ 00:10:28.332 00:10:28.332 real 0m34.977s 00:10:28.332 user 0m35.895s 00:10:28.332 sys 0m6.337s 00:10:28.332 08:44:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:28.332 08:44:45 -- common/autotest_common.sh@10 -- # set +x 00:10:28.332 ************************************ 00:10:28.332 END TEST accel 00:10:28.332 ************************************ 00:10:28.332 08:44:45 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:28.332 08:44:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:28.332 08:44:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:28.332 08:44:45 -- common/autotest_common.sh@10 -- # set +x 00:10:28.591 ************************************ 00:10:28.591 START TEST accel_rpc 00:10:28.591 ************************************ 00:10:28.591 08:44:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:28.591 * Looking for test storage... 
00:10:28.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:10:28.591 08:44:45 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:28.591 08:44:45 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1931513 00:10:28.591 08:44:45 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:28.591 08:44:45 -- accel/accel_rpc.sh@15 -- # waitforlisten 1931513 00:10:28.591 08:44:45 -- common/autotest_common.sh@817 -- # '[' -z 1931513 ']' 00:10:28.591 08:44:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.591 08:44:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:28.591 08:44:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.591 08:44:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:28.591 08:44:45 -- common/autotest_common.sh@10 -- # set +x 00:10:28.591 [2024-04-26 08:44:45.796800] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:10:28.591 [2024-04-26 08:44:45.796849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1931513 ] 00:10:28.591 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.849 [2024-04-26 08:44:45.864903] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.849 [2024-04-26 08:44:45.931448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.415 08:44:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:29.415 08:44:46 -- common/autotest_common.sh@850 -- # return 0 00:10:29.415 08:44:46 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:29.415 08:44:46 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:29.415 08:44:46 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:29.415 08:44:46 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:29.415 08:44:46 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:29.415 08:44:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:29.415 08:44:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:29.416 08:44:46 -- common/autotest_common.sh@10 -- # set +x 00:10:29.674 ************************************ 00:10:29.674 START TEST accel_assign_opcode 00:10:29.674 ************************************ 00:10:29.674 08:44:46 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:10:29.674 08:44:46 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:29.674 08:44:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:29.674 08:44:46 -- common/autotest_common.sh@10 -- # set +x 00:10:29.674 [2024-04-26 08:44:46.745851] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:29.674 08:44:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:29.674 08:44:46 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:29.674 08:44:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:29.674 08:44:46 -- common/autotest_common.sh@10 -- # set +x 00:10:29.674 [2024-04-26 08:44:46.753861] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:10:29.674 08:44:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:29.674 08:44:46 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:29.674 08:44:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:29.674 08:44:46 -- common/autotest_common.sh@10 -- # set +x 00:10:29.932 08:44:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:29.932 08:44:46 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:29.932 08:44:46 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:29.932 08:44:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:29.932 08:44:46 -- accel/accel_rpc.sh@42 -- # grep software 00:10:29.932 08:44:46 -- common/autotest_common.sh@10 -- # set +x 00:10:29.932 08:44:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:29.932 software 00:10:29.932 00:10:29.932 real 0m0.237s 00:10:29.932 user 0m0.047s 00:10:29.932 sys 0m0.011s 00:10:29.932 08:44:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:29.932 08:44:46 -- common/autotest_common.sh@10 -- # set +x 00:10:29.932 ************************************ 00:10:29.932 END TEST accel_assign_opcode 00:10:29.932 ************************************ 00:10:29.932 08:44:47 -- accel/accel_rpc.sh@55 -- # killprocess 1931513 00:10:29.932 08:44:47 -- common/autotest_common.sh@936 -- # '[' -z 1931513 ']' 00:10:29.932 08:44:47 -- common/autotest_common.sh@940 -- # kill -0 1931513 00:10:29.932 08:44:47 -- common/autotest_common.sh@941 -- # uname 00:10:29.932 08:44:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:29.932 08:44:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1931513 00:10:29.932 08:44:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:29.932 08:44:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:29.932 08:44:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1931513' 00:10:29.932 killing process with pid 1931513 00:10:29.932 08:44:47 -- common/autotest_common.sh@955 -- # kill 1931513 00:10:29.932 08:44:47 -- common/autotest_common.sh@960 -- # wait 1931513 00:10:30.191 00:10:30.191 real 0m1.769s 00:10:30.191 user 0m1.853s 00:10:30.191 sys 0m0.526s 00:10:30.191 08:44:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:30.191 08:44:47 -- common/autotest_common.sh@10 -- # set +x 00:10:30.191 ************************************ 00:10:30.191 END TEST accel_rpc 00:10:30.191 ************************************ 00:10:30.450 08:44:47 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:30.450 08:44:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:30.450 08:44:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:30.450 08:44:47 -- common/autotest_common.sh@10 -- # set +x 00:10:30.450 ************************************ 00:10:30.450 START TEST app_cmdline 00:10:30.450 ************************************ 00:10:30.450 08:44:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:30.709 * Looking for test storage... 
00:10:30.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:30.709 08:44:47 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:30.709 08:44:47 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1931874 00:10:30.709 08:44:47 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:30.709 08:44:47 -- app/cmdline.sh@18 -- # waitforlisten 1931874 00:10:30.709 08:44:47 -- common/autotest_common.sh@817 -- # '[' -z 1931874 ']' 00:10:30.709 08:44:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.709 08:44:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:30.709 08:44:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.709 08:44:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:30.709 08:44:47 -- common/autotest_common.sh@10 -- # set +x 00:10:30.709 [2024-04-26 08:44:47.780392] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:10:30.709 [2024-04-26 08:44:47.780444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1931874 ] 00:10:30.709 EAL: No free 2048 kB hugepages reported on node 1 00:10:30.709 [2024-04-26 08:44:47.850066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.709 [2024-04-26 08:44:47.921652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.643 08:44:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:31.643 08:44:48 -- common/autotest_common.sh@850 -- # return 0 00:10:31.643 08:44:48 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:31.643 { 00:10:31.643 "version": "SPDK v24.05-pre git sha1 f8d98be2d", 00:10:31.643 "fields": { 00:10:31.643 "major": 24, 00:10:31.643 "minor": 5, 00:10:31.643 "patch": 0, 00:10:31.643 "suffix": "-pre", 00:10:31.643 "commit": "f8d98be2d" 00:10:31.643 } 00:10:31.643 } 00:10:31.643 08:44:48 -- app/cmdline.sh@22 -- # expected_methods=() 00:10:31.643 08:44:48 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:31.643 08:44:48 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:31.643 08:44:48 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:31.643 08:44:48 -- app/cmdline.sh@26 -- # sort 00:10:31.643 08:44:48 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:31.643 08:44:48 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:31.643 08:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:31.643 08:44:48 -- common/autotest_common.sh@10 -- # set +x 00:10:31.643 08:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:31.643 08:44:48 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:31.643 08:44:48 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:31.643 08:44:48 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:31.643 08:44:48 -- common/autotest_common.sh@638 -- # local es=0 00:10:31.643 08:44:48 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:31.643 08:44:48 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.643 08:44:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:31.643 08:44:48 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.643 08:44:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:31.643 08:44:48 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.643 08:44:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:31.643 08:44:48 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.643 08:44:48 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:31.643 08:44:48 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:31.904 request: 00:10:31.904 { 00:10:31.904 "method": "env_dpdk_get_mem_stats", 00:10:31.904 "req_id": 1 00:10:31.904 } 00:10:31.904 Got JSON-RPC error response 00:10:31.904 response: 00:10:31.904 { 00:10:31.904 "code": -32601, 00:10:31.904 "message": "Method not found" 00:10:31.904 } 00:10:31.904 08:44:48 -- common/autotest_common.sh@641 -- # es=1 00:10:31.904 08:44:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:31.904 08:44:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:31.904 08:44:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:31.904 08:44:48 -- app/cmdline.sh@1 -- # killprocess 1931874 00:10:31.904 08:44:48 -- common/autotest_common.sh@936 -- # '[' -z 1931874 ']' 00:10:31.904 08:44:48 -- common/autotest_common.sh@940 -- # kill -0 1931874 00:10:31.904 08:44:48 -- common/autotest_common.sh@941 -- # uname 00:10:31.904 08:44:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:31.904 08:44:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1931874 00:10:31.904 08:44:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:31.904 08:44:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:31.904 08:44:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1931874' 00:10:31.904 killing process with pid 1931874 00:10:31.904 08:44:48 -- common/autotest_common.sh@955 -- # kill 1931874 00:10:31.904 08:44:48 -- common/autotest_common.sh@960 -- # wait 1931874 00:10:32.163 00:10:32.163 real 0m1.704s 00:10:32.163 user 0m1.947s 00:10:32.163 sys 0m0.489s 00:10:32.163 08:44:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:32.163 08:44:49 -- common/autotest_common.sh@10 -- # set +x 00:10:32.163 ************************************ 00:10:32.163 END TEST app_cmdline 00:10:32.163 ************************************ 00:10:32.163 08:44:49 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:32.163 08:44:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:32.163 08:44:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.163 08:44:49 -- common/autotest_common.sh@10 -- # set +x 00:10:32.421 ************************************ 00:10:32.421 START TEST version 00:10:32.421 
************************************ 00:10:32.421 08:44:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:32.421 * Looking for test storage... 00:10:32.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:32.421 08:44:49 -- app/version.sh@17 -- # get_header_version major 00:10:32.421 08:44:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:32.421 08:44:49 -- app/version.sh@14 -- # cut -f2 00:10:32.421 08:44:49 -- app/version.sh@14 -- # tr -d '"' 00:10:32.421 08:44:49 -- app/version.sh@17 -- # major=24 00:10:32.421 08:44:49 -- app/version.sh@18 -- # get_header_version minor 00:10:32.421 08:44:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:32.421 08:44:49 -- app/version.sh@14 -- # cut -f2 00:10:32.422 08:44:49 -- app/version.sh@14 -- # tr -d '"' 00:10:32.422 08:44:49 -- app/version.sh@18 -- # minor=5 00:10:32.422 08:44:49 -- app/version.sh@19 -- # get_header_version patch 00:10:32.422 08:44:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:32.422 08:44:49 -- app/version.sh@14 -- # cut -f2 00:10:32.422 08:44:49 -- app/version.sh@14 -- # tr -d '"' 00:10:32.422 08:44:49 -- app/version.sh@19 -- # patch=0 00:10:32.422 08:44:49 -- app/version.sh@20 -- # get_header_version suffix 00:10:32.422 08:44:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:32.422 08:44:49 -- app/version.sh@14 -- # cut -f2 00:10:32.422 08:44:49 -- app/version.sh@14 -- # tr -d '"' 00:10:32.422 08:44:49 -- app/version.sh@20 -- # suffix=-pre 00:10:32.422 08:44:49 -- app/version.sh@22 -- # version=24.5 00:10:32.422 08:44:49 -- app/version.sh@25 -- # (( patch != 0 )) 00:10:32.422 08:44:49 -- app/version.sh@28 -- # version=24.5rc0 00:10:32.422 08:44:49 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:32.422 08:44:49 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:32.680 08:44:49 -- app/version.sh@30 -- # py_version=24.5rc0 00:10:32.680 08:44:49 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:10:32.680 00:10:32.680 real 0m0.186s 00:10:32.680 user 0m0.089s 00:10:32.680 sys 0m0.142s 00:10:32.680 08:44:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:32.680 08:44:49 -- common/autotest_common.sh@10 -- # set +x 00:10:32.680 ************************************ 00:10:32.680 END TEST version 00:10:32.680 ************************************ 00:10:32.680 08:44:49 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:10:32.680 08:44:49 -- spdk/autotest.sh@194 -- # uname -s 00:10:32.680 08:44:49 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:32.680 08:44:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:32.680 08:44:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:32.680 08:44:49 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:10:32.680 08:44:49 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:10:32.680 08:44:49 -- spdk/autotest.sh@258 -- # timing_exit lib 00:10:32.680 08:44:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:32.680 08:44:49 -- common/autotest_common.sh@10 -- # set +x 00:10:32.680 08:44:49 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:10:32.680 08:44:49 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:10:32.680 08:44:49 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:10:32.680 08:44:49 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:10:32.680 08:44:49 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:10:32.680 08:44:49 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:10:32.680 08:44:49 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:32.680 08:44:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:32.680 08:44:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.680 08:44:49 -- common/autotest_common.sh@10 -- # set +x 00:10:32.939 ************************************ 00:10:32.939 START TEST nvmf_tcp 00:10:32.939 ************************************ 00:10:32.939 08:44:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:32.939 * Looking for test storage... 00:10:32.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:32.939 08:44:50 -- nvmf/nvmf.sh@10 -- # uname -s 00:10:32.939 08:44:50 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:32.939 08:44:50 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.939 08:44:50 -- nvmf/common.sh@7 -- # uname -s 00:10:32.939 08:44:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.939 08:44:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.939 08:44:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.939 08:44:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.939 08:44:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.939 08:44:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.939 08:44:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.939 08:44:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.939 08:44:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.939 08:44:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.939 08:44:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:32.939 08:44:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:32.939 08:44:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.939 08:44:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.939 08:44:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.939 08:44:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.939 08:44:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.939 08:44:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.939 08:44:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.939 08:44:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.939 08:44:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.939 08:44:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.940 08:44:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.940 08:44:50 -- paths/export.sh@5 -- # export PATH 00:10:32.940 08:44:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.940 08:44:50 -- nvmf/common.sh@47 -- # : 0 00:10:32.940 08:44:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:32.940 08:44:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:32.940 08:44:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.940 08:44:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.940 08:44:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.940 08:44:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:32.940 08:44:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:32.940 08:44:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:32.940 08:44:50 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:32.940 08:44:50 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:10:32.940 08:44:50 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:10:32.940 08:44:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:32.940 08:44:50 -- common/autotest_common.sh@10 -- # set +x 00:10:32.940 08:44:50 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:10:32.940 08:44:50 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:32.940 08:44:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:32.940 08:44:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.940 08:44:50 -- common/autotest_common.sh@10 -- # set +x 00:10:33.199 ************************************ 00:10:33.199 START TEST nvmf_example 00:10:33.199 ************************************ 00:10:33.199 08:44:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:33.199 * Looking for test storage... 
00:10:33.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.199 08:44:50 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.199 08:44:50 -- nvmf/common.sh@7 -- # uname -s 00:10:33.199 08:44:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.199 08:44:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.199 08:44:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.199 08:44:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.199 08:44:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.199 08:44:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.199 08:44:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.199 08:44:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.199 08:44:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.199 08:44:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.199 08:44:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:33.199 08:44:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:33.199 08:44:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.199 08:44:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.199 08:44:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.199 08:44:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.199 08:44:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.199 08:44:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.199 08:44:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.199 08:44:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.199 08:44:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.199 08:44:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.199 08:44:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.199 08:44:50 -- paths/export.sh@5 -- # export PATH 00:10:33.199 08:44:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.199 08:44:50 -- nvmf/common.sh@47 -- # : 0 00:10:33.199 08:44:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:33.199 08:44:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:33.199 08:44:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.199 08:44:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.199 08:44:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.199 08:44:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:33.199 08:44:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:33.199 08:44:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:33.199 08:44:50 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:33.199 08:44:50 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:33.199 08:44:50 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:33.199 08:44:50 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:33.199 08:44:50 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:33.199 08:44:50 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:33.199 08:44:50 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:33.199 08:44:50 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:33.199 08:44:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:33.199 08:44:50 -- common/autotest_common.sh@10 -- # set +x 00:10:33.200 08:44:50 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:33.200 08:44:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:33.200 08:44:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.200 08:44:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:33.200 08:44:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:33.200 08:44:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:33.200 08:44:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.200 08:44:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:33.459 08:44:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.459 08:44:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:33.459 08:44:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:33.459 08:44:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:33.459 08:44:50 -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.024 08:44:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:40.024 08:44:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:40.024 08:44:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:40.024 08:44:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:40.024 08:44:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:40.024 08:44:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:40.024 08:44:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:40.024 08:44:56 -- nvmf/common.sh@295 -- # net_devs=() 00:10:40.024 08:44:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:40.024 08:44:56 -- nvmf/common.sh@296 -- # e810=() 00:10:40.024 08:44:56 -- nvmf/common.sh@296 -- # local -ga e810 00:10:40.024 08:44:56 -- nvmf/common.sh@297 -- # x722=() 00:10:40.024 08:44:56 -- nvmf/common.sh@297 -- # local -ga x722 00:10:40.024 08:44:56 -- nvmf/common.sh@298 -- # mlx=() 00:10:40.024 08:44:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:40.024 08:44:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.024 08:44:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.024 08:44:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.024 08:44:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.024 08:44:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.025 08:44:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.025 08:44:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.025 08:44:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.025 08:44:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.025 08:44:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.025 08:44:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.025 08:44:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:40.025 08:44:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:40.025 08:44:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:40.025 08:44:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.025 08:44:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:40.025 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:40.025 08:44:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.025 08:44:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:40.025 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:40.025 08:44:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:10:40.025 08:44:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:40.025 08:44:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.025 08:44:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.025 08:44:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:40.025 08:44:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.025 08:44:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:40.025 Found net devices under 0000:af:00.0: cvl_0_0 00:10:40.025 08:44:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.025 08:44:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.025 08:44:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.025 08:44:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:40.025 08:44:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.025 08:44:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:40.025 Found net devices under 0000:af:00.1: cvl_0_1 00:10:40.025 08:44:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.025 08:44:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:40.025 08:44:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:40.025 08:44:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:40.025 08:44:56 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.025 08:44:56 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.025 08:44:56 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.025 08:44:56 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:40.025 08:44:56 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.025 08:44:56 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.025 08:44:56 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:40.025 08:44:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.025 08:44:56 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.025 08:44:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:40.025 08:44:56 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:40.025 08:44:56 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.025 08:44:56 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.025 08:44:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.025 08:44:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.025 08:44:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:40.025 08:44:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.025 08:44:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.025 08:44:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.025 08:44:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:40.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:40.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:10:40.025 00:10:40.025 --- 10.0.0.2 ping statistics --- 00:10:40.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.025 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:10:40.025 08:44:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:10:40.025 00:10:40.025 --- 10.0.0.1 ping statistics --- 00:10:40.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.025 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:10:40.025 08:44:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.025 08:44:56 -- nvmf/common.sh@411 -- # return 0 00:10:40.025 08:44:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:40.025 08:44:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.025 08:44:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:40.025 08:44:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.025 08:44:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:40.025 08:44:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:40.025 08:44:56 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:40.025 08:44:56 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:40.025 08:44:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:40.025 08:44:56 -- common/autotest_common.sh@10 -- # set +x 00:10:40.025 08:44:56 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:40.025 08:44:56 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:40.025 08:44:56 -- target/nvmf_example.sh@34 -- # nvmfpid=1935691 00:10:40.025 08:44:56 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:40.025 08:44:56 -- target/nvmf_example.sh@36 -- # waitforlisten 1935691 00:10:40.025 08:44:56 -- common/autotest_common.sh@817 -- # '[' -z 1935691 ']' 00:10:40.025 08:44:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.025 08:44:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:40.025 08:44:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:40.025 08:44:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:40.025 08:44:56 -- common/autotest_common.sh@10 -- # set +x 00:10:40.025 08:44:56 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:40.025 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.592 08:44:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:40.592 08:44:57 -- common/autotest_common.sh@850 -- # return 0 00:10:40.592 08:44:57 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:40.592 08:44:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:40.592 08:44:57 -- common/autotest_common.sh@10 -- # set +x 00:10:40.592 08:44:57 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.592 08:44:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.592 08:44:57 -- common/autotest_common.sh@10 -- # set +x 00:10:40.592 08:44:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.592 08:44:57 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:40.592 08:44:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.592 08:44:57 -- common/autotest_common.sh@10 -- # set +x 00:10:40.592 08:44:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.592 08:44:57 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:40.592 08:44:57 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:40.592 08:44:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.592 08:44:57 -- common/autotest_common.sh@10 -- # set +x 00:10:40.592 08:44:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.592 08:44:57 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:40.592 08:44:57 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.592 08:44:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.592 08:44:57 -- common/autotest_common.sh@10 -- # set +x 00:10:40.592 08:44:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.592 08:44:57 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.592 08:44:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:40.592 08:44:57 -- common/autotest_common.sh@10 -- # set +x 00:10:40.850 08:44:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:40.850 08:44:57 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:40.850 08:44:57 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:40.850 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.047 Initializing NVMe Controllers 00:10:53.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:53.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:53.047 Initialization complete. Launching workers. 
00:10:53.047 ========================================================
00:10:53.047                                                                             Latency(us)
00:10:53.047 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:10:53.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   14330.00      55.98    4466.87     681.00   15451.48
00:10:53.047 ========================================================
00:10:53.047 Total                                                                    :   14330.00      55.98    4466.87     681.00   15451.48
00:10:53.047
00:10:53.047 08:45:08 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:10:53.047 08:45:08 -- target/nvmf_example.sh@66 -- # nvmftestfini
00:10:53.047 08:45:08 -- nvmf/common.sh@477 -- # nvmfcleanup
00:10:53.047 08:45:08 -- nvmf/common.sh@117 -- # sync
00:10:53.047 08:45:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:53.047 08:45:08 -- nvmf/common.sh@120 -- # set +e
00:10:53.047 08:45:08 -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:53.047 08:45:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:53.047 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:10:53.047 08:45:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:53.047 08:45:08 -- nvmf/common.sh@124 -- # set -e
00:10:53.047 08:45:08 -- nvmf/common.sh@125 -- # return 0
00:10:53.047 08:45:08 -- nvmf/common.sh@478 -- # '[' -n 1935691 ']'
00:10:53.047 08:45:08 -- nvmf/common.sh@479 -- # killprocess 1935691
00:10:53.047 08:45:08 -- common/autotest_common.sh@936 -- # '[' -z 1935691 ']'
00:10:53.047 08:45:08 -- common/autotest_common.sh@940 -- # kill -0 1935691
00:10:53.047 08:45:08 -- common/autotest_common.sh@941 -- # uname
00:10:53.047 08:45:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:10:53.047 08:45:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1935691
00:10:53.047 08:45:08 -- common/autotest_common.sh@942 -- # process_name=nvmf
00:10:53.047 08:45:08 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']'
00:10:53.047 08:45:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1935691'
killing process with pid 1935691
00:10:53.047 08:45:08 -- common/autotest_common.sh@955 -- # kill 1935691
00:10:53.047 08:45:08 -- common/autotest_common.sh@960 -- # wait 1935691
00:10:53.047 nvmf threads initialize successfully
00:10:53.047 bdev subsystem init successfully
00:10:53.047 created a nvmf target service
00:10:53.047 create targets's poll groups done
00:10:53.047 all subsystems of target started
00:10:53.047 nvmf target is running
00:10:53.047 all subsystems of target stopped
00:10:53.047 destroy targets's poll groups done
00:10:53.047 destroyed the nvmf target service
00:10:53.047 bdev subsystem finish successfully
00:10:53.048 nvmf threads destroy successfully
00:10:53.048 08:45:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:10:53.048 08:45:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:10:53.048 08:45:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:10:53.048 08:45:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:53.048 08:45:08 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:53.048 08:45:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:53.048 08:45:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:53.048 08:45:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:53.305 08:45:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:53.305 08:45:10 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:10:53.305 08:45:10 -- common/autotest_common.sh@716 -- # xtrace_disable
00:10:53.305 08:45:10 -- common/autotest_common.sh@10 -- # set +x
00:10:53.563
00:10:53.563 real 0m20.269s
00:10:53.563 user 0m45.610s
00:10:53.563 sys 0m7.044s
00:10:53.563 08:45:10 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:10:53.563 08:45:10 -- common/autotest_common.sh@10 -- # set +x
00:10:53.563 ************************************
00:10:53.563 END TEST nvmf_example
00:10:53.563 ************************************
00:10:53.563 08:45:10 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:10:53.563 08:45:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:10:53.563 08:45:10 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:53.563 08:45:10 -- common/autotest_common.sh@10 -- # set +x
00:10:53.563 ************************************
00:10:53.563 START TEST nvmf_filesystem
00:10:53.563 ************************************
00:10:53.563 08:45:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:10:53.825 * Looking for test storage...
00:10:53.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:53.825 08:45:10 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
00:10:53.825 08:45:10 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:10:53.825 08:45:10 -- common/autotest_common.sh@34 -- # set -e
00:10:53.825 08:45:10 -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:10:53.825 08:45:10 -- common/autotest_common.sh@36 -- # shopt -s extglob
00:10:53.825 08:45:10 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']'
00:10:53.825 08:45:10 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:10:53.825 08:45:10 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:10:53.825 08:45:10 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:10:53.825 08:45:10 -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:10:53.825 08:45:10 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:10:53.825 08:45:10 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:10:53.825 08:45:10 -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:10:53.825 08:45:10 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:10:53.825 08:45:10 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:10:53.825 08:45:10 -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:10:53.825 08:45:10 -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:10:53.825 08:45:10 -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:10:53.825 08:45:10 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:10:53.825 08:45:10 -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:10:53.825 08:45:10 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:10:53.825 08:45:10 -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:10:53.825 08:45:10 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:10:53.825 08:45:10 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:10:53.825 08:45:10 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n
00:10:53.825 08:45:10 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:10:53.825 08:45:10 --
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:53.825 08:45:10 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:53.825 08:45:10 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:53.825 08:45:10 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:53.825 08:45:10 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:53.825 08:45:10 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:53.825 08:45:10 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:53.825 08:45:10 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:53.825 08:45:10 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:53.825 08:45:10 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:53.825 08:45:10 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:53.825 08:45:10 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:53.825 08:45:10 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:53.825 08:45:10 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:53.825 08:45:10 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:53.825 08:45:10 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:53.825 08:45:10 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:53.825 08:45:10 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:53.825 08:45:10 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:53.826 08:45:10 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:53.826 08:45:10 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:53.826 08:45:10 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:53.826 08:45:10 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:10:53.826 08:45:10 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:53.826 08:45:10 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:53.826 08:45:10 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:53.826 08:45:10 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:53.826 08:45:10 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:10:53.826 08:45:10 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:10:53.826 08:45:10 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:53.826 08:45:10 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:10:53.826 08:45:10 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:10:53.826 08:45:10 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:10:53.826 08:45:10 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:10:53.826 08:45:10 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:10:53.826 08:45:10 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:10:53.826 08:45:10 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:10:53.826 08:45:10 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:10:53.826 08:45:10 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:10:53.826 08:45:10 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:10:53.826 08:45:10 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:10:53.826 08:45:10 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:10:53.826 08:45:10 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:10:53.826 08:45:10 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:10:53.826 08:45:10 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:10:53.826 08:45:10 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:10:53.826 
08:45:10 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:10:53.826 08:45:10 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:10:53.826 08:45:10 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:10:53.826 08:45:10 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:53.826 08:45:10 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:10:53.826 08:45:10 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:10:53.826 08:45:10 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:10:53.826 08:45:10 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:10:53.826 08:45:10 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:10:53.826 08:45:10 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:10:53.826 08:45:10 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:10:53.826 08:45:10 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:10:53.826 08:45:10 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:10:53.826 08:45:10 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:10:53.826 08:45:10 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:10:53.826 08:45:10 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:53.826 08:45:10 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:10:53.826 08:45:10 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:10:53.826 08:45:10 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:53.826 08:45:10 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:53.826 08:45:10 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:53.826 08:45:10 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:53.826 08:45:10 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:53.826 08:45:10 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:53.826 08:45:10 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:53.826 08:45:10 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:53.826 08:45:10 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:53.826 08:45:10 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:53.826 08:45:10 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:53.826 08:45:10 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:53.826 08:45:10 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:53.826 08:45:10 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:53.826 08:45:10 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:53.826 08:45:10 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:53.826 #define SPDK_CONFIG_H 00:10:53.826 #define SPDK_CONFIG_APPS 1 00:10:53.826 #define SPDK_CONFIG_ARCH native 00:10:53.826 #undef SPDK_CONFIG_ASAN 00:10:53.826 #undef SPDK_CONFIG_AVAHI 00:10:53.826 #undef SPDK_CONFIG_CET 00:10:53.826 #define SPDK_CONFIG_COVERAGE 1 00:10:53.826 #define SPDK_CONFIG_CROSS_PREFIX 00:10:53.826 #undef SPDK_CONFIG_CRYPTO 00:10:53.826 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:53.826 #undef 
SPDK_CONFIG_CUSTOMOCF 00:10:53.826 #undef SPDK_CONFIG_DAOS 00:10:53.826 #define SPDK_CONFIG_DAOS_DIR 00:10:53.826 #define SPDK_CONFIG_DEBUG 1 00:10:53.826 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:53.826 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:53.826 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:53.826 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:53.826 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:53.826 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:53.826 #define SPDK_CONFIG_EXAMPLES 1 00:10:53.826 #undef SPDK_CONFIG_FC 00:10:53.826 #define SPDK_CONFIG_FC_PATH 00:10:53.826 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:53.826 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:53.826 #undef SPDK_CONFIG_FUSE 00:10:53.826 #undef SPDK_CONFIG_FUZZER 00:10:53.826 #define SPDK_CONFIG_FUZZER_LIB 00:10:53.826 #undef SPDK_CONFIG_GOLANG 00:10:53.826 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:53.826 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:53.826 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:53.826 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:10:53.826 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:53.826 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:53.826 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:53.826 #define SPDK_CONFIG_IDXD 1 00:10:53.826 #undef SPDK_CONFIG_IDXD_KERNEL 00:10:53.826 #undef SPDK_CONFIG_IPSEC_MB 00:10:53.826 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:53.826 #define SPDK_CONFIG_ISAL 1 00:10:53.826 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:53.826 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:53.826 #define SPDK_CONFIG_LIBDIR 00:10:53.826 #undef SPDK_CONFIG_LTO 00:10:53.826 #define SPDK_CONFIG_MAX_LCORES 00:10:53.826 #define SPDK_CONFIG_NVME_CUSE 1 00:10:53.826 #undef SPDK_CONFIG_OCF 00:10:53.826 #define SPDK_CONFIG_OCF_PATH 00:10:53.826 #define SPDK_CONFIG_OPENSSL_PATH 00:10:53.826 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:53.826 #define SPDK_CONFIG_PGO_DIR 00:10:53.826 #undef SPDK_CONFIG_PGO_USE 00:10:53.826 #define SPDK_CONFIG_PREFIX /usr/local 00:10:53.826 #undef SPDK_CONFIG_RAID5F 00:10:53.826 #undef SPDK_CONFIG_RBD 00:10:53.826 #define SPDK_CONFIG_RDMA 1 00:10:53.826 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:53.826 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:53.826 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:53.826 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:53.826 #define SPDK_CONFIG_SHARED 1 00:10:53.826 #undef SPDK_CONFIG_SMA 00:10:53.826 #define SPDK_CONFIG_TESTS 1 00:10:53.826 #undef SPDK_CONFIG_TSAN 00:10:53.826 #define SPDK_CONFIG_UBLK 1 00:10:53.826 #define SPDK_CONFIG_UBSAN 1 00:10:53.826 #undef SPDK_CONFIG_UNIT_TESTS 00:10:53.826 #undef SPDK_CONFIG_URING 00:10:53.826 #define SPDK_CONFIG_URING_PATH 00:10:53.826 #undef SPDK_CONFIG_URING_ZNS 00:10:53.826 #undef SPDK_CONFIG_USDT 00:10:53.826 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:53.826 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:53.826 #define SPDK_CONFIG_VFIO_USER 1 00:10:53.826 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:53.826 #define SPDK_CONFIG_VHOST 1 00:10:53.826 #define SPDK_CONFIG_VIRTIO 1 00:10:53.826 #undef SPDK_CONFIG_VTUNE 00:10:53.826 #define SPDK_CONFIG_VTUNE_DIR 00:10:53.826 #define SPDK_CONFIG_WERROR 1 00:10:53.826 #define SPDK_CONFIG_WPDK_DIR 00:10:53.826 #undef SPDK_CONFIG_XNVME 00:10:53.826 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:53.826 08:45:10 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:53.826 08:45:10 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.826 08:45:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.826 08:45:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.826 08:45:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.826 08:45:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.826 08:45:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.826 08:45:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.826 08:45:10 -- paths/export.sh@5 -- # export PATH 00:10:53.827 08:45:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.827 08:45:10 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:53.827 08:45:10 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:53.827 08:45:10 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:53.827 08:45:10 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:53.827 08:45:10 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:53.827 08:45:10 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:53.827 08:45:10 -- pm/common@67 -- # TEST_TAG=N/A 00:10:53.827 08:45:10 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:53.827 08:45:10 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:53.827 08:45:10 -- pm/common@71 -- # uname -s 00:10:53.827 08:45:10 -- pm/common@71 -- # PM_OS=Linux 00:10:53.827 08:45:10 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:53.827 08:45:10 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:10:53.827 08:45:10 -- pm/common@76 -- # [[ Linux == Linux ]] 00:10:53.827 08:45:10 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:10:53.827 08:45:10 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:10:53.827 08:45:10 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:53.827 08:45:10 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:53.827 08:45:10 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:10:53.827 08:45:10 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:10:53.827 08:45:10 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:53.827 08:45:10 -- common/autotest_common.sh@57 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:10:53.827 08:45:10 -- common/autotest_common.sh@61 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:53.827 08:45:10 -- common/autotest_common.sh@63 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:10:53.827 08:45:10 -- common/autotest_common.sh@65 -- # : 1 00:10:53.827 08:45:10 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:53.827 08:45:10 -- common/autotest_common.sh@67 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:10:53.827 08:45:10 -- common/autotest_common.sh@69 -- # : 00:10:53.827 08:45:10 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:10:53.827 08:45:10 -- common/autotest_common.sh@71 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:10:53.827 08:45:10 -- common/autotest_common.sh@73 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:10:53.827 08:45:10 -- common/autotest_common.sh@75 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:10:53.827 08:45:10 -- common/autotest_common.sh@77 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:53.827 08:45:10 -- common/autotest_common.sh@79 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:10:53.827 08:45:10 -- common/autotest_common.sh@81 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:10:53.827 08:45:10 -- common/autotest_common.sh@83 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:10:53.827 08:45:10 -- common/autotest_common.sh@85 -- # : 1 00:10:53.827 08:45:10 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:10:53.827 08:45:10 -- common/autotest_common.sh@87 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:10:53.827 08:45:10 -- common/autotest_common.sh@89 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:10:53.827 08:45:10 -- common/autotest_common.sh@91 -- # : 1 
00:10:53.827 08:45:10 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:10:53.827 08:45:10 -- common/autotest_common.sh@93 -- # : 1 00:10:53.827 08:45:10 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:10:53.827 08:45:10 -- common/autotest_common.sh@95 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:53.827 08:45:10 -- common/autotest_common.sh@97 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:10:53.827 08:45:10 -- common/autotest_common.sh@99 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:10:53.827 08:45:10 -- common/autotest_common.sh@101 -- # : tcp 00:10:53.827 08:45:10 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:53.827 08:45:10 -- common/autotest_common.sh@103 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:10:53.827 08:45:10 -- common/autotest_common.sh@105 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:10:53.827 08:45:10 -- common/autotest_common.sh@107 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:10:53.827 08:45:10 -- common/autotest_common.sh@109 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:10:53.827 08:45:10 -- common/autotest_common.sh@111 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:10:53.827 08:45:10 -- common/autotest_common.sh@113 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:10:53.827 08:45:10 -- common/autotest_common.sh@115 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:10:53.827 08:45:10 -- common/autotest_common.sh@117 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:53.827 08:45:10 -- common/autotest_common.sh@119 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:10:53.827 08:45:10 -- common/autotest_common.sh@121 -- # : 1 00:10:53.827 08:45:10 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:10:53.827 08:45:10 -- common/autotest_common.sh@123 -- # : 00:10:53.827 08:45:10 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:53.827 08:45:10 -- common/autotest_common.sh@125 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:10:53.827 08:45:10 -- common/autotest_common.sh@127 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:10:53.827 08:45:10 -- common/autotest_common.sh@129 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:10:53.827 08:45:10 -- common/autotest_common.sh@131 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:10:53.827 08:45:10 -- common/autotest_common.sh@133 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:10:53.827 08:45:10 -- common/autotest_common.sh@135 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:10:53.827 08:45:10 -- common/autotest_common.sh@137 -- # : 00:10:53.827 08:45:10 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:10:53.827 08:45:10 -- 
common/autotest_common.sh@139 -- # : true 00:10:53.827 08:45:10 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:10:53.827 08:45:10 -- common/autotest_common.sh@141 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:10:53.827 08:45:10 -- common/autotest_common.sh@143 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:10:53.827 08:45:10 -- common/autotest_common.sh@145 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:10:53.827 08:45:10 -- common/autotest_common.sh@147 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:10:53.827 08:45:10 -- common/autotest_common.sh@149 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:10:53.827 08:45:10 -- common/autotest_common.sh@151 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:10:53.827 08:45:10 -- common/autotest_common.sh@153 -- # : e810 00:10:53.827 08:45:10 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:10:53.827 08:45:10 -- common/autotest_common.sh@155 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:10:53.827 08:45:10 -- common/autotest_common.sh@157 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:10:53.827 08:45:10 -- common/autotest_common.sh@159 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:10:53.827 08:45:10 -- common/autotest_common.sh@161 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:10:53.827 08:45:10 -- common/autotest_common.sh@163 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:10:53.827 08:45:10 -- common/autotest_common.sh@166 -- # : 00:10:53.827 08:45:10 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:10:53.827 08:45:10 -- common/autotest_common.sh@168 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:10:53.827 08:45:10 -- common/autotest_common.sh@170 -- # : 0 00:10:53.827 08:45:10 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:53.827 08:45:10 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:53.827 08:45:10 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:53.827 08:45:10 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:53.827 08:45:10 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:53.827 08:45:10 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:53.827 08:45:10 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:53.828 08:45:10 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:53.828 08:45:10 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:53.828 08:45:10 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:53.828 08:45:10 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:53.828 08:45:10 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:53.828 08:45:10 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:53.828 08:45:10 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:53.828 08:45:10 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:10:53.828 08:45:10 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:53.828 08:45:10 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:53.828 08:45:10 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:53.828 08:45:10 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:53.828 08:45:10 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:53.828 08:45:10 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:10:53.828 08:45:10 -- common/autotest_common.sh@199 -- # cat 00:10:53.828 08:45:10 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:10:53.828 08:45:10 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:53.828 08:45:10 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:53.828 08:45:10 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:53.828 08:45:10 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:53.828 08:45:10 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:10:53.828 08:45:10 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:10:53.828 08:45:10 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:53.828 08:45:10 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:53.828 08:45:10 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:53.828 08:45:10 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:53.828 08:45:10 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:53.828 08:45:10 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:53.828 08:45:10 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:53.828 08:45:10 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:53.828 08:45:10 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:53.828 08:45:10 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:53.828 08:45:10 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:53.828 08:45:10 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:53.828 08:45:10 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:10:53.828 08:45:10 -- common/autotest_common.sh@252 -- # export valgrind= 00:10:53.828 08:45:10 -- common/autotest_common.sh@252 -- # valgrind= 00:10:53.828 08:45:10 -- common/autotest_common.sh@258 -- # uname -s 00:10:53.828 08:45:10 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:10:53.828 08:45:10 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:10:53.828 08:45:10 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:10:53.828 08:45:10 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:10:53.828 08:45:10 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:10:53.828 08:45:10 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:10:53.828 
08:45:10 -- common/autotest_common.sh@268 -- # MAKE=make 00:10:53.828 08:45:10 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j112 00:10:53.828 08:45:10 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:10:53.828 08:45:10 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:10:53.828 08:45:10 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:10:53.828 08:45:10 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:10:53.828 08:45:10 -- common/autotest_common.sh@289 -- # for i in "$@" 00:10:53.828 08:45:10 -- common/autotest_common.sh@290 -- # case "$i" in 00:10:53.828 08:45:10 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:10:53.828 08:45:10 -- common/autotest_common.sh@307 -- # [[ -z 1938769 ]] 00:10:53.828 08:45:10 -- common/autotest_common.sh@307 -- # kill -0 1938769 00:10:53.828 08:45:11 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:10:53.828 08:45:11 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:10:53.828 08:45:11 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:10:53.828 08:45:11 -- common/autotest_common.sh@320 -- # local mount target_dir 00:10:53.828 08:45:11 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:10:53.828 08:45:11 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:10:53.828 08:45:11 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:10:53.828 08:45:11 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:10:53.828 08:45:11 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.uRZlXr 00:10:53.828 08:45:11 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:53.828 08:45:11 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:10:53.828 08:45:11 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:10:53.828 08:45:11 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.uRZlXr/tests/target /tmp/spdk.uRZlXr 00:10:53.828 08:45:11 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:10:53.828 08:45:11 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:53.828 08:45:11 -- common/autotest_common.sh@316 -- # df -T 00:10:53.828 08:45:11 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:10:53.828 08:45:11 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:10:53.828 08:45:11 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:10:53.828 08:45:11 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:10:53.828 08:45:11 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:10:53.828 08:45:11 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:10:53.828 08:45:11 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:53.828 08:45:11 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:10:53.828 08:45:11 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:10:53.828 08:45:11 -- common/autotest_common.sh@351 -- # avails["$mount"]=995438592 00:10:53.828 08:45:11 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:10:53.828 08:45:11 -- common/autotest_common.sh@352 -- # uses["$mount"]=4288991232 00:10:53.828 08:45:11 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:53.828 08:45:11 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root
00:10:53.828 08:45:11 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay
00:10:53.828 08:45:11 -- common/autotest_common.sh@351 -- # avails["$mount"]=52314771456
00:10:53.828 08:45:11 -- common/autotest_common.sh@351 -- # sizes["$mount"]=61742301184
00:10:53.828 08:45:11 -- common/autotest_common.sh@352 -- # uses["$mount"]=9427529728
00:10:53.828 08:45:11 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:10:53.828 08:45:11 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs
00:10:53.828 08:45:11 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs
00:10:53.828 08:45:11 -- common/autotest_common.sh@351 -- # avails["$mount"]=30817611776
00:10:53.828 08:45:11 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30871150592
00:10:53.828 08:45:11 -- common/autotest_common.sh@352 -- # uses["$mount"]=53538816
00:10:53.828 08:45:11 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:10:53.828 08:45:11 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs
00:10:53.828 08:45:11 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs
00:10:53.828 08:45:11 -- common/autotest_common.sh@351 -- # avails["$mount"]=12339077120
00:10:53.828 08:45:11 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12348461056
00:10:53.828 08:45:11 -- common/autotest_common.sh@352 -- # uses["$mount"]=9383936
00:10:53.828 08:45:11 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:10:53.828 08:45:11 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs
00:10:53.828 08:45:11 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs
00:10:53.828 08:45:11 -- common/autotest_common.sh@351 -- # avails["$mount"]=30870286336
00:10:53.828 08:45:11 -- common/autotest_common.sh@351 -- # sizes["$mount"]=30871150592
00:10:53.828 08:45:11 -- common/autotest_common.sh@352 -- # uses["$mount"]=864256
00:10:53.828 08:45:11 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:10:53.828 08:45:11 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs
00:10:53.828 08:45:11 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs
00:10:53.828 08:45:11 -- common/autotest_common.sh@351 -- # avails["$mount"]=6174224384
00:10:53.828 08:45:11 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6174228480
00:10:53.828 08:45:11 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096
00:10:53.828 08:45:11 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount
00:10:53.828 08:45:11 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n'
00:10:53.828 * Looking for test storage...
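The trace has just read a df -T listing into the mounts/fss/sizes/avails/uses arrays; the probe that follows picks a scratch directory from those. Condensed, the selection logic of set_test_storage is roughly this (variable names as in autotest_common.sh; the real function also special-cases tmpfs and ramfs mounts):

    # arrays avails/sizes/uses are keyed by mount point, populated from `df -T` above
    requested_size=2214592512   # 2 GiB of test data plus slack
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        (( target_space == 0 || target_space < requested_size )) && continue
        # projected usage if we add requested_size; here 9427529728 + 2214592512 = 11642122240
        new_size=$(( uses[$mount] + requested_size ))
        (( new_size * 100 / sizes[$mount] > 95 )) && continue   # would leave the fs >95% full
        export SPDK_TEST_STORAGE=$target_dir
        break
    done

With / showing 52314771456 bytes available against a requested 2214592512, the overlay root passes both checks, which is why the next lines export SPDK_TEST_STORAGE under test/nvmf/target.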
00:10:53.829 08:45:11 -- common/autotest_common.sh@357 -- # local target_space new_size
00:10:53.829 08:45:11 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}"
00:10:53.829 08:45:11 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:53.829 08:45:11 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}'
00:10:53.829 08:45:11 -- common/autotest_common.sh@361 -- # mount=/
00:10:53.829 08:45:11 -- common/autotest_common.sh@363 -- # target_space=52314771456
00:10:53.829 08:45:11 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size ))
00:10:53.829 08:45:11 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size ))
00:10:53.829 08:45:11 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]]
00:10:53.829 08:45:11 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]]
00:10:53.829 08:45:11 -- common/autotest_common.sh@369 -- # [[ / == / ]]
00:10:53.829 08:45:11 -- common/autotest_common.sh@370 -- # new_size=11642122240
00:10:53.829 08:45:11 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 ))
00:10:53.829 08:45:11 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:53.829 08:45:11 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:53.829 08:45:11 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:53.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:53.829 08:45:11 -- common/autotest_common.sh@378 -- # return 0
00:10:53.829 08:45:11 -- common/autotest_common.sh@1668 -- # set -o errtrace
00:10:53.829 08:45:11 -- common/autotest_common.sh@1669 -- # shopt -s extdebug
00:10:53.829 08:45:11 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:10:53.829 08:45:11 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:10:53.829 08:45:11 -- common/autotest_common.sh@1673 -- # true
00:10:53.829 08:45:11 -- common/autotest_common.sh@1675 -- # xtrace_fd
00:10:53.829 08:45:11 -- common/autotest_common.sh@25 -- # [[ -n 14 ]]
00:10:53.829 08:45:11 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]]
00:10:53.829 08:45:11 -- common/autotest_common.sh@27 -- # exec
00:10:53.829 08:45:11 -- common/autotest_common.sh@29 -- # exec
00:10:53.829 08:45:11 -- common/autotest_common.sh@31 -- # xtrace_restore
00:10:53.829 08:45:11 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ?
0 : 0 - 1]' 00:10:53.829 08:45:11 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:53.829 08:45:11 -- common/autotest_common.sh@18 -- # set -x 00:10:53.829 08:45:11 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.829 08:45:11 -- nvmf/common.sh@7 -- # uname -s 00:10:53.829 08:45:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.829 08:45:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.829 08:45:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.829 08:45:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.829 08:45:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.829 08:45:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.829 08:45:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.829 08:45:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.829 08:45:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.829 08:45:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.088 08:45:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:54.088 08:45:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:54.088 08:45:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.088 08:45:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.088 08:45:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.088 08:45:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.088 08:45:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.088 08:45:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.088 08:45:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.088 08:45:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.088 08:45:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.088 08:45:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.088 08:45:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.088 08:45:11 -- paths/export.sh@5 -- # export PATH 00:10:54.088 08:45:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.088 08:45:11 -- nvmf/common.sh@47 -- # : 0 00:10:54.088 08:45:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:54.088 08:45:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:54.088 08:45:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.088 08:45:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.088 08:45:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.088 08:45:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:54.088 08:45:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:54.088 08:45:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:54.088 08:45:11 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:54.088 08:45:11 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:54.088 08:45:11 -- target/filesystem.sh@15 -- # nvmftestinit 00:10:54.088 08:45:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:54.088 08:45:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.088 08:45:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:54.088 08:45:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:54.088 08:45:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:54.088 08:45:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.088 08:45:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:54.088 08:45:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.088 08:45:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:54.088 08:45:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:54.088 08:45:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:54.088 08:45:11 -- common/autotest_common.sh@10 -- # set +x 00:11:00.657 08:45:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:00.657 08:45:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:00.657 08:45:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:00.657 08:45:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:00.657 08:45:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:00.657 08:45:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:00.657 08:45:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:00.657 08:45:17 -- 
nvmf/common.sh@295 -- # net_devs=() 00:11:00.657 08:45:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:00.657 08:45:17 -- nvmf/common.sh@296 -- # e810=() 00:11:00.657 08:45:17 -- nvmf/common.sh@296 -- # local -ga e810 00:11:00.657 08:45:17 -- nvmf/common.sh@297 -- # x722=() 00:11:00.657 08:45:17 -- nvmf/common.sh@297 -- # local -ga x722 00:11:00.657 08:45:17 -- nvmf/common.sh@298 -- # mlx=() 00:11:00.657 08:45:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:00.657 08:45:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.657 08:45:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.657 08:45:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.657 08:45:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.657 08:45:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.657 08:45:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.657 08:45:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.657 08:45:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.657 08:45:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.657 08:45:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.657 08:45:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.657 08:45:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:00.657 08:45:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:00.657 08:45:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:00.657 08:45:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.657 08:45:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:00.657 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:00.657 08:45:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.657 08:45:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:00.657 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:00.657 08:45:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:00.657 08:45:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.657 08:45:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.657 08:45:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:00.657 08:45:17 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.657 08:45:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:00.657 Found net devices under 0000:af:00.0: cvl_0_0 00:11:00.657 08:45:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.657 08:45:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.657 08:45:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.657 08:45:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:00.657 08:45:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.657 08:45:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:00.657 Found net devices under 0000:af:00.1: cvl_0_1 00:11:00.657 08:45:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.657 08:45:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:00.657 08:45:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:00.657 08:45:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:00.657 08:45:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:00.657 08:45:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.657 08:45:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.657 08:45:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.657 08:45:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:00.657 08:45:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.657 08:45:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.657 08:45:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:00.657 08:45:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.657 08:45:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.657 08:45:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:00.657 08:45:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:00.657 08:45:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.657 08:45:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.657 08:45:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.916 08:45:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.916 08:45:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:00.916 08:45:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.916 08:45:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.916 08:45:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.916 08:45:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:00.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:11:00.916 00:11:00.916 --- 10.0.0.2 ping statistics --- 00:11:00.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.916 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:11:00.916 08:45:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:00.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:11:00.916 00:11:00.916 --- 10.0.0.1 ping statistics --- 00:11:00.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.916 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:11:00.916 08:45:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.916 08:45:18 -- nvmf/common.sh@411 -- # return 0 00:11:00.916 08:45:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:00.916 08:45:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.916 08:45:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:00.916 08:45:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:00.916 08:45:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.916 08:45:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:00.916 08:45:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:00.916 08:45:18 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:00.916 08:45:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:00.916 08:45:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:00.916 08:45:18 -- common/autotest_common.sh@10 -- # set +x 00:11:01.176 ************************************ 00:11:01.176 START TEST nvmf_filesystem_no_in_capsule 00:11:01.176 ************************************ 00:11:01.176 08:45:18 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:11:01.176 08:45:18 -- target/filesystem.sh@47 -- # in_capsule=0 00:11:01.176 08:45:18 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:01.176 08:45:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:01.176 08:45:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:01.176 08:45:18 -- common/autotest_common.sh@10 -- # set +x 00:11:01.176 08:45:18 -- nvmf/common.sh@470 -- # nvmfpid=1942160 00:11:01.176 08:45:18 -- nvmf/common.sh@471 -- # waitforlisten 1942160 00:11:01.176 08:45:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.176 08:45:18 -- common/autotest_common.sh@817 -- # '[' -z 1942160 ']' 00:11:01.176 08:45:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.176 08:45:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:01.176 08:45:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.176 08:45:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:01.176 08:45:18 -- common/autotest_common.sh@10 -- # set +x 00:11:01.176 [2024-04-26 08:45:18.351123] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:11:01.176 [2024-04-26 08:45:18.351162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.176 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.436 [2024-04-26 08:45:18.423024] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.436 [2024-04-26 08:45:18.495911] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:01.436 [2024-04-26 08:45:18.495953] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.436 [2024-04-26 08:45:18.495963] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.436 [2024-04-26 08:45:18.495972] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.436 [2024-04-26 08:45:18.495978] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.436 [2024-04-26 08:45:18.496041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.436 [2024-04-26 08:45:18.496136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.436 [2024-04-26 08:45:18.496222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.436 [2024-04-26 08:45:18.496223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.003 08:45:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:02.003 08:45:19 -- common/autotest_common.sh@850 -- # return 0 00:11:02.003 08:45:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:02.003 08:45:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:02.003 08:45:19 -- common/autotest_common.sh@10 -- # set +x 00:11:02.003 08:45:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.003 08:45:19 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:02.003 08:45:19 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:02.003 08:45:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.003 08:45:19 -- common/autotest_common.sh@10 -- # set +x 00:11:02.003 [2024-04-26 08:45:19.204378] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.003 08:45:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.003 08:45:19 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:02.003 08:45:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.003 08:45:19 -- common/autotest_common.sh@10 -- # set +x 00:11:02.262 Malloc1 00:11:02.262 08:45:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.262 08:45:19 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:02.262 08:45:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.262 08:45:19 -- common/autotest_common.sh@10 -- # set +x 00:11:02.262 08:45:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.262 08:45:19 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:02.262 08:45:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.262 08:45:19 -- common/autotest_common.sh@10 -- # set +x 00:11:02.262 08:45:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.262 08:45:19 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.262 08:45:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.262 08:45:19 -- common/autotest_common.sh@10 -- # set +x 00:11:02.262 [2024-04-26 08:45:19.356753] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.262 08:45:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.262 08:45:19 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:11:02.262 08:45:19 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:11:02.262 08:45:19 -- common/autotest_common.sh@1365 -- # local bdev_info 00:11:02.262 08:45:19 -- common/autotest_common.sh@1366 -- # local bs 00:11:02.262 08:45:19 -- common/autotest_common.sh@1367 -- # local nb 00:11:02.262 08:45:19 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:02.262 08:45:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.262 08:45:19 -- common/autotest_common.sh@10 -- # set +x 00:11:02.262 08:45:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.262 08:45:19 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:11:02.262 { 00:11:02.262 "name": "Malloc1", 00:11:02.262 "aliases": [ 00:11:02.262 "932cadc6-ae85-4915-bd9b-3338beb9fcb5" 00:11:02.262 ], 00:11:02.262 "product_name": "Malloc disk", 00:11:02.262 "block_size": 512, 00:11:02.262 "num_blocks": 1048576, 00:11:02.262 "uuid": "932cadc6-ae85-4915-bd9b-3338beb9fcb5", 00:11:02.262 "assigned_rate_limits": { 00:11:02.262 "rw_ios_per_sec": 0, 00:11:02.262 "rw_mbytes_per_sec": 0, 00:11:02.262 "r_mbytes_per_sec": 0, 00:11:02.262 "w_mbytes_per_sec": 0 00:11:02.262 }, 00:11:02.262 "claimed": true, 00:11:02.262 "claim_type": "exclusive_write", 00:11:02.262 "zoned": false, 00:11:02.262 "supported_io_types": { 00:11:02.262 "read": true, 00:11:02.262 "write": true, 00:11:02.262 "unmap": true, 00:11:02.262 "write_zeroes": true, 00:11:02.262 "flush": true, 00:11:02.262 "reset": true, 00:11:02.262 "compare": false, 00:11:02.262 "compare_and_write": false, 00:11:02.262 "abort": true, 00:11:02.262 "nvme_admin": false, 00:11:02.262 "nvme_io": false 00:11:02.262 }, 00:11:02.262 "memory_domains": [ 00:11:02.262 { 00:11:02.262 "dma_device_id": "system", 00:11:02.262 "dma_device_type": 1 00:11:02.262 }, 00:11:02.262 { 00:11:02.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.262 "dma_device_type": 2 00:11:02.262 } 00:11:02.262 ], 00:11:02.262 "driver_specific": {} 00:11:02.262 } 00:11:02.262 ]' 00:11:02.262 08:45:19 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:11:02.262 08:45:19 -- common/autotest_common.sh@1369 -- # bs=512 00:11:02.262 08:45:19 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:11:02.262 08:45:19 -- common/autotest_common.sh@1370 -- # nb=1048576 00:11:02.262 08:45:19 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:11:02.262 08:45:19 -- common/autotest_common.sh@1374 -- # echo 512 00:11:02.262 08:45:19 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:02.263 08:45:19 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:03.637 08:45:20 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:03.637 08:45:20 -- common/autotest_common.sh@1184 -- # local i=0 00:11:03.637 08:45:20 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.637 08:45:20 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:03.637 08:45:20 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:06.166 08:45:22 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:06.166 08:45:22 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:06.166 08:45:22 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.166 08:45:22 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
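The trace above shows the initiator side of the test: nvme connect is pointed at the listener created a few lines earlier, and waitforserial then polls lsblk until a block device carrying the subsystem serial appears. A condensed sketch of that pattern, using the address, NQN, and serial from this log (the loop bound and 2-second sleep mirror the "i++ <= 15" / "sleep 2" seen in the xtrace; the --hostnqn/--hostid arguments are omitted here, and the rest is illustrative rather than the script verbatim):

# connect to the SPDK target listening at 10.0.0.2:4420 over TCP
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# poll until the namespace shows up as a local block device with the expected serial
for i in $(seq 1 15); do
  lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME && break
  sleep 2
done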
00:11:06.166 08:45:22 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.166 08:45:22 -- common/autotest_common.sh@1194 -- # return 0 00:11:06.166 08:45:22 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:06.166 08:45:22 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:06.166 08:45:22 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:06.166 08:45:22 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:06.166 08:45:22 -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:06.166 08:45:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:06.166 08:45:22 -- setup/common.sh@80 -- # echo 536870912 00:11:06.166 08:45:22 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:06.166 08:45:22 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:06.166 08:45:22 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:06.166 08:45:22 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:06.166 08:45:22 -- target/filesystem.sh@69 -- # partprobe 00:11:06.166 08:45:23 -- target/filesystem.sh@70 -- # sleep 1 00:11:07.102 08:45:24 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:07.102 08:45:24 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:07.102 08:45:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:07.102 08:45:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:07.102 08:45:24 -- common/autotest_common.sh@10 -- # set +x 00:11:07.102 ************************************ 00:11:07.102 START TEST filesystem_ext4 00:11:07.102 ************************************ 00:11:07.102 08:45:24 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:07.102 08:45:24 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:07.102 08:45:24 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:07.102 08:45:24 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:07.102 08:45:24 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:11:07.102 08:45:24 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:07.102 08:45:24 -- common/autotest_common.sh@914 -- # local i=0 00:11:07.102 08:45:24 -- common/autotest_common.sh@915 -- # local force 00:11:07.102 08:45:24 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:11:07.102 08:45:24 -- common/autotest_common.sh@918 -- # force=-F 00:11:07.102 08:45:24 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:07.102 mke2fs 1.46.5 (30-Dec-2021) 00:11:07.361 Discarding device blocks: 0/522240 done 00:11:07.361 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:07.361 Filesystem UUID: 5a08f28d-9627-4bab-8050-3811bdd7955e 00:11:07.361 Superblock backups stored on blocks: 00:11:07.361 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:07.361 00:11:07.361 Allocating group tables: 0/64 done 00:11:07.361 Writing inode tables: 0/64 done 00:11:07.361 Creating journal (8192 blocks): done 00:11:07.361 Writing superblocks and filesystem accounting information: 0/64 done 00:11:07.361 00:11:07.361 08:45:24 -- common/autotest_common.sh@931 -- # return 0 00:11:07.361 08:45:24 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:08.295 08:45:25 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:08.295 08:45:25 -- target/filesystem.sh@25 -- # sync 00:11:08.295 08:45:25 -- target/filesystem.sh@26 -- # rm 
/mnt/device/aaa 00:11:08.295 08:45:25 -- target/filesystem.sh@27 -- # sync 00:11:08.295 08:45:25 -- target/filesystem.sh@29 -- # i=0 00:11:08.295 08:45:25 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:08.295 08:45:25 -- target/filesystem.sh@37 -- # kill -0 1942160 00:11:08.295 08:45:25 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:08.295 08:45:25 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:08.295 08:45:25 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:08.295 08:45:25 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:08.295 00:11:08.295 real 0m1.095s 00:11:08.295 user 0m0.033s 00:11:08.295 sys 0m0.074s 00:11:08.295 08:45:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:08.295 08:45:25 -- common/autotest_common.sh@10 -- # set +x 00:11:08.295 ************************************ 00:11:08.295 END TEST filesystem_ext4 00:11:08.295 ************************************ 00:11:08.295 08:45:25 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:08.295 08:45:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:08.295 08:45:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:08.295 08:45:25 -- common/autotest_common.sh@10 -- # set +x 00:11:08.553 ************************************ 00:11:08.553 START TEST filesystem_btrfs 00:11:08.553 ************************************ 00:11:08.553 08:45:25 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:08.553 08:45:25 -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:08.553 08:45:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:08.553 08:45:25 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:08.553 08:45:25 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:11:08.553 08:45:25 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:08.553 08:45:25 -- common/autotest_common.sh@914 -- # local i=0 00:11:08.553 08:45:25 -- common/autotest_common.sh@915 -- # local force 00:11:08.553 08:45:25 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:11:08.553 08:45:25 -- common/autotest_common.sh@920 -- # force=-f 00:11:08.553 08:45:25 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:08.811 btrfs-progs v6.6.2 00:11:08.811 See https://btrfs.readthedocs.io for more information. 00:11:08.811 00:11:08.811 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:08.811 NOTE: several default settings have changed in version 5.15, please make sure 00:11:08.811 this does not affect your deployments: 00:11:08.811 - DUP for metadata (-m dup) 00:11:08.811 - enabled no-holes (-O no-holes) 00:11:08.811 - enabled free-space-tree (-R free-space-tree) 00:11:08.811 00:11:08.811 Label: (null) 00:11:08.811 UUID: 3300426b-de01-41c2-b761-0dc939333394 00:11:08.812 Node size: 16384 00:11:08.812 Sector size: 4096 00:11:08.812 Filesystem size: 510.00MiB 00:11:08.812 Block group profiles: 00:11:08.812 Data: single 8.00MiB 00:11:08.812 Metadata: DUP 32.00MiB 00:11:08.812 System: DUP 8.00MiB 00:11:08.812 SSD detected: yes 00:11:08.812 Zoned device: no 00:11:08.812 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:08.812 Runtime features: free-space-tree 00:11:08.812 Checksum: crc32c 00:11:08.812 Number of devices: 1 00:11:08.812 Devices: 00:11:08.812 ID SIZE PATH 00:11:08.812 1 510.00MiB /dev/nvme0n1p1 00:11:08.812 00:11:08.812 08:45:25 -- common/autotest_common.sh@931 -- # return 0 00:11:08.812 08:45:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.746 08:45:26 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.746 08:45:26 -- target/filesystem.sh@25 -- # sync 00:11:09.746 08:45:26 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.746 08:45:26 -- target/filesystem.sh@27 -- # sync 00:11:09.746 08:45:26 -- target/filesystem.sh@29 -- # i=0 00:11:09.746 08:45:26 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.746 08:45:26 -- target/filesystem.sh@37 -- # kill -0 1942160 00:11:09.746 08:45:26 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.746 08:45:26 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.746 08:45:26 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.746 08:45:26 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.746 00:11:09.746 real 0m1.316s 00:11:09.746 user 0m0.032s 00:11:09.746 sys 0m0.139s 00:11:09.746 08:45:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:09.746 08:45:26 -- common/autotest_common.sh@10 -- # set +x 00:11:09.746 ************************************ 00:11:09.746 END TEST filesystem_btrfs 00:11:09.746 ************************************ 00:11:09.746 08:45:26 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:09.746 08:45:26 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:09.746 08:45:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:09.746 08:45:26 -- common/autotest_common.sh@10 -- # set +x 00:11:10.005 ************************************ 00:11:10.005 START TEST filesystem_xfs 00:11:10.005 ************************************ 00:11:10.005 08:45:27 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:11:10.005 08:45:27 -- target/filesystem.sh@18 -- # fstype=xfs 00:11:10.005 08:45:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.005 08:45:27 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:10.005 08:45:27 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:11:10.005 08:45:27 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:10.005 08:45:27 -- common/autotest_common.sh@914 -- # local i=0 00:11:10.005 08:45:27 -- common/autotest_common.sh@915 -- # local force 00:11:10.005 08:45:27 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:11:10.005 08:45:27 -- common/autotest_common.sh@920 -- # force=-f 00:11:10.005 08:45:27 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:10.005 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:10.005 = sectsz=512 attr=2, projid32bit=1 00:11:10.005 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:10.005 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:10.005 data = bsize=4096 blocks=130560, imaxpct=25 00:11:10.005 = sunit=0 swidth=0 blks 00:11:10.005 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:10.005 log =internal log bsize=4096 blocks=16384, version=2 00:11:10.005 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:10.005 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:11.381 Discarding blocks...Done. 00:11:11.381 08:45:28 -- common/autotest_common.sh@931 -- # return 0 00:11:11.381 08:45:28 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:13.286 08:45:30 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:13.286 08:45:30 -- target/filesystem.sh@25 -- # sync 00:11:13.286 08:45:30 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:13.286 08:45:30 -- target/filesystem.sh@27 -- # sync 00:11:13.286 08:45:30 -- target/filesystem.sh@29 -- # i=0 00:11:13.286 08:45:30 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:13.286 08:45:30 -- target/filesystem.sh@37 -- # kill -0 1942160 00:11:13.286 08:45:30 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:13.286 08:45:30 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:13.286 08:45:30 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:13.286 08:45:30 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:13.286 00:11:13.286 real 0m3.048s 00:11:13.286 user 0m0.029s 00:11:13.286 sys 0m0.086s 00:11:13.286 08:45:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:13.286 08:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:13.286 ************************************ 00:11:13.286 END TEST filesystem_xfs 00:11:13.286 ************************************ 00:11:13.286 08:45:30 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:13.286 08:45:30 -- target/filesystem.sh@93 -- # sync 00:11:13.286 08:45:30 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.286 08:45:30 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:13.286 08:45:30 -- common/autotest_common.sh@1205 -- # local i=0 00:11:13.286 08:45:30 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:13.286 08:45:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.286 08:45:30 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:13.286 08:45:30 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.286 08:45:30 -- common/autotest_common.sh@1217 -- # return 0 00:11:13.286 08:45:30 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.286 08:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.286 08:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:13.286 08:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:13.286 08:45:30 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:13.286 08:45:30 -- target/filesystem.sh@101 -- # killprocess 1942160 00:11:13.286 08:45:30 -- common/autotest_common.sh@936 -- # '[' -z 1942160 ']' 00:11:13.286 08:45:30 -- common/autotest_common.sh@940 -- # kill -0 1942160 00:11:13.286 08:45:30 -- 
common/autotest_common.sh@941 -- # uname 00:11:13.286 08:45:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:13.286 08:45:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1942160 00:11:13.286 08:45:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:13.286 08:45:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:13.286 08:45:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1942160' 00:11:13.286 killing process with pid 1942160 00:11:13.286 08:45:30 -- common/autotest_common.sh@955 -- # kill 1942160 00:11:13.286 08:45:30 -- common/autotest_common.sh@960 -- # wait 1942160 00:11:13.853 08:45:30 -- target/filesystem.sh@102 -- # nvmfpid= 00:11:13.853 00:11:13.853 real 0m12.505s 00:11:13.853 user 0m48.795s 00:11:13.853 sys 0m1.977s 00:11:13.853 08:45:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:13.853 08:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:13.853 ************************************ 00:11:13.853 END TEST nvmf_filesystem_no_in_capsule 00:11:13.853 ************************************ 00:11:13.853 08:45:30 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:13.853 08:45:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:13.853 08:45:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:13.853 08:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:13.853 ************************************ 00:11:13.853 START TEST nvmf_filesystem_in_capsule 00:11:13.853 ************************************ 00:11:13.853 08:45:31 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:11:13.853 08:45:31 -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:13.853 08:45:31 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:13.853 08:45:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:13.853 08:45:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:13.853 08:45:31 -- common/autotest_common.sh@10 -- # set +x 00:11:13.853 08:45:31 -- nvmf/common.sh@470 -- # nvmfpid=1944541 00:11:13.853 08:45:31 -- nvmf/common.sh@471 -- # waitforlisten 1944541 00:11:13.853 08:45:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:13.853 08:45:31 -- common/autotest_common.sh@817 -- # '[' -z 1944541 ']' 00:11:13.853 08:45:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.853 08:45:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:13.853 08:45:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.853 08:45:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:13.853 08:45:31 -- common/autotest_common.sh@10 -- # set +x 00:11:13.853 [2024-04-26 08:45:31.080966] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
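Before the in-capsule variant spins up, the no-in-capsule test tears everything down; the xtrace above compresses that sequence. A hedged sketch of the order of operations, with the pid and NQN taken from this log (rpc_cmd is the autotest wrapper seen in the trace, and killprocess internals beyond kill/wait are simplified):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1            # drop the test partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1             # detach the initiator
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # remove the subsystem
kill 1942160 && wait 1942160                              # killprocess $nvmfpid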
00:11:13.853 [2024-04-26 08:45:31.081011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.112 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.112 [2024-04-26 08:45:31.155471] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.112 [2024-04-26 08:45:31.222714] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.112 [2024-04-26 08:45:31.222757] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.112 [2024-04-26 08:45:31.222766] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.112 [2024-04-26 08:45:31.222776] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.112 [2024-04-26 08:45:31.222799] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.112 [2024-04-26 08:45:31.222849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.112 [2024-04-26 08:45:31.222941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.112 [2024-04-26 08:45:31.223027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.112 [2024-04-26 08:45:31.223029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.677 08:45:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:14.677 08:45:31 -- common/autotest_common.sh@850 -- # return 0 00:11:14.677 08:45:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:14.677 08:45:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:14.677 08:45:31 -- common/autotest_common.sh@10 -- # set +x 00:11:14.936 08:45:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.936 08:45:31 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:14.936 08:45:31 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:14.936 08:45:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.936 08:45:31 -- common/autotest_common.sh@10 -- # set +x 00:11:14.936 [2024-04-26 08:45:31.930322] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.936 08:45:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.936 08:45:31 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:14.936 08:45:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.936 08:45:31 -- common/autotest_common.sh@10 -- # set +x 00:11:14.936 Malloc1 00:11:14.936 08:45:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.936 08:45:32 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:14.936 08:45:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.936 08:45:32 -- common/autotest_common.sh@10 -- # set +x 00:11:14.936 08:45:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.936 08:45:32 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.936 08:45:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.936 08:45:32 -- common/autotest_common.sh@10 -- # set +x 00:11:14.936 08:45:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.936 08:45:32 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.936 08:45:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.936 08:45:32 -- common/autotest_common.sh@10 -- # set +x 00:11:14.936 [2024-04-26 08:45:32.086374] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.936 08:45:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.936 08:45:32 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:14.936 08:45:32 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:11:14.936 08:45:32 -- common/autotest_common.sh@1365 -- # local bdev_info 00:11:14.936 08:45:32 -- common/autotest_common.sh@1366 -- # local bs 00:11:14.936 08:45:32 -- common/autotest_common.sh@1367 -- # local nb 00:11:14.936 08:45:32 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:14.936 08:45:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.936 08:45:32 -- common/autotest_common.sh@10 -- # set +x 00:11:14.936 08:45:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.936 08:45:32 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:11:14.936 { 00:11:14.936 "name": "Malloc1", 00:11:14.936 "aliases": [ 00:11:14.936 "72b81357-f79d-4dbb-b55f-7577be25e790" 00:11:14.936 ], 00:11:14.936 "product_name": "Malloc disk", 00:11:14.936 "block_size": 512, 00:11:14.936 "num_blocks": 1048576, 00:11:14.936 "uuid": "72b81357-f79d-4dbb-b55f-7577be25e790", 00:11:14.936 "assigned_rate_limits": { 00:11:14.936 "rw_ios_per_sec": 0, 00:11:14.936 "rw_mbytes_per_sec": 0, 00:11:14.936 "r_mbytes_per_sec": 0, 00:11:14.936 "w_mbytes_per_sec": 0 00:11:14.936 }, 00:11:14.936 "claimed": true, 00:11:14.936 "claim_type": "exclusive_write", 00:11:14.936 "zoned": false, 00:11:14.936 "supported_io_types": { 00:11:14.936 "read": true, 00:11:14.936 "write": true, 00:11:14.936 "unmap": true, 00:11:14.936 "write_zeroes": true, 00:11:14.936 "flush": true, 00:11:14.936 "reset": true, 00:11:14.936 "compare": false, 00:11:14.936 "compare_and_write": false, 00:11:14.936 "abort": true, 00:11:14.936 "nvme_admin": false, 00:11:14.936 "nvme_io": false 00:11:14.936 }, 00:11:14.936 "memory_domains": [ 00:11:14.936 { 00:11:14.936 "dma_device_id": "system", 00:11:14.936 "dma_device_type": 1 00:11:14.936 }, 00:11:14.936 { 00:11:14.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.936 "dma_device_type": 2 00:11:14.936 } 00:11:14.936 ], 00:11:14.936 "driver_specific": {} 00:11:14.936 } 00:11:14.936 ]' 00:11:14.936 08:45:32 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:11:14.936 08:45:32 -- common/autotest_common.sh@1369 -- # bs=512 00:11:14.936 08:45:32 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:11:15.195 08:45:32 -- common/autotest_common.sh@1370 -- # nb=1048576 00:11:15.195 08:45:32 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:11:15.195 08:45:32 -- common/autotest_common.sh@1374 -- # echo 512 00:11:15.195 08:45:32 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:15.195 08:45:32 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:16.571 08:45:33 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.571 08:45:33 -- common/autotest_common.sh@1184 -- # local i=0 00:11:16.571 08:45:33 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.571 08:45:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:16.571 08:45:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:18.471 08:45:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:18.471 08:45:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:18.471 08:45:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.471 08:45:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:18.471 08:45:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.471 08:45:35 -- common/autotest_common.sh@1194 -- # return 0 00:11:18.471 08:45:35 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:18.471 08:45:35 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:18.471 08:45:35 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:18.471 08:45:35 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:18.471 08:45:35 -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:18.471 08:45:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:18.471 08:45:35 -- setup/common.sh@80 -- # echo 536870912 00:11:18.471 08:45:35 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:18.471 08:45:35 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:18.471 08:45:35 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:18.471 08:45:35 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:19.038 08:45:35 -- target/filesystem.sh@69 -- # partprobe 00:11:19.295 08:45:36 -- target/filesystem.sh@70 -- # sleep 1 00:11:20.670 08:45:37 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:20.670 08:45:37 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:20.670 08:45:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:20.670 08:45:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:20.670 08:45:37 -- common/autotest_common.sh@10 -- # set +x 00:11:20.670 ************************************ 00:11:20.670 START TEST filesystem_in_capsule_ext4 00:11:20.670 ************************************ 00:11:20.670 08:45:37 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:20.670 08:45:37 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:20.670 08:45:37 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:20.670 08:45:37 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:20.670 08:45:37 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:11:20.670 08:45:37 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:20.670 08:45:37 -- common/autotest_common.sh@914 -- # local i=0 00:11:20.670 08:45:37 -- common/autotest_common.sh@915 -- # local force 00:11:20.670 08:45:37 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:11:20.670 08:45:37 -- common/autotest_common.sh@918 -- # force=-F 00:11:20.670 08:45:37 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:20.670 mke2fs 1.46.5 (30-Dec-2021) 00:11:20.670 Discarding device blocks: 0/522240 done 00:11:20.670 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:20.670 Filesystem UUID: 1f861957-1b33-4936-8dfb-26698afe5c55 00:11:20.670 Superblock backups stored on blocks: 00:11:20.670 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:20.670 00:11:20.670 
Allocating group tables: 0/64 done 00:11:20.670 Writing inode tables: 0/64 done 00:11:20.929 Creating journal (8192 blocks): done 00:11:20.929 Writing superblocks and filesystem accounting information: 0/64 done 00:11:20.929 00:11:20.929 08:45:37 -- common/autotest_common.sh@931 -- # return 0 00:11:20.929 08:45:37 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:20.929 08:45:38 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:21.187 08:45:38 -- target/filesystem.sh@25 -- # sync 00:11:21.187 08:45:38 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:21.187 08:45:38 -- target/filesystem.sh@27 -- # sync 00:11:21.187 08:45:38 -- target/filesystem.sh@29 -- # i=0 00:11:21.187 08:45:38 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:21.187 08:45:38 -- target/filesystem.sh@37 -- # kill -0 1944541 00:11:21.187 08:45:38 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:21.187 08:45:38 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:21.187 08:45:38 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:21.187 08:45:38 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:21.187 00:11:21.187 real 0m0.545s 00:11:21.187 user 0m0.029s 00:11:21.187 sys 0m0.076s 00:11:21.187 08:45:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:21.187 08:45:38 -- common/autotest_common.sh@10 -- # set +x 00:11:21.187 ************************************ 00:11:21.187 END TEST filesystem_in_capsule_ext4 00:11:21.187 ************************************ 00:11:21.187 08:45:38 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:21.187 08:45:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:21.187 08:45:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:21.187 08:45:38 -- common/autotest_common.sh@10 -- # set +x 00:11:21.446 ************************************ 00:11:21.446 START TEST filesystem_in_capsule_btrfs 00:11:21.446 ************************************ 00:11:21.446 08:45:38 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:21.446 08:45:38 -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:21.446 08:45:38 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.446 08:45:38 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:21.446 08:45:38 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:11:21.446 08:45:38 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:21.446 08:45:38 -- common/autotest_common.sh@914 -- # local i=0 00:11:21.446 08:45:38 -- common/autotest_common.sh@915 -- # local force 00:11:21.446 08:45:38 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:11:21.446 08:45:38 -- common/autotest_common.sh@920 -- # force=-f 00:11:21.446 08:45:38 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:21.704 btrfs-progs v6.6.2 00:11:21.704 See https://btrfs.readthedocs.io for more information. 00:11:21.704 00:11:21.704 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:21.704 NOTE: several default settings have changed in version 5.15, please make sure 00:11:21.704 this does not affect your deployments: 00:11:21.704 - DUP for metadata (-m dup) 00:11:21.704 - enabled no-holes (-O no-holes) 00:11:21.704 - enabled free-space-tree (-R free-space-tree) 00:11:21.704 00:11:21.704 Label: (null) 00:11:21.704 UUID: 4bda8373-37ad-4b9b-89db-529985253665 00:11:21.704 Node size: 16384 00:11:21.704 Sector size: 4096 00:11:21.704 Filesystem size: 510.00MiB 00:11:21.704 Block group profiles: 00:11:21.704 Data: single 8.00MiB 00:11:21.704 Metadata: DUP 32.00MiB 00:11:21.704 System: DUP 8.00MiB 00:11:21.704 SSD detected: yes 00:11:21.704 Zoned device: no 00:11:21.704 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:21.704 Runtime features: free-space-tree 00:11:21.704 Checksum: crc32c 00:11:21.704 Number of devices: 1 00:11:21.704 Devices: 00:11:21.704 ID SIZE PATH 00:11:21.704 1 510.00MiB /dev/nvme0n1p1 00:11:21.704 00:11:21.704 08:45:38 -- common/autotest_common.sh@931 -- # return 0 00:11:21.704 08:45:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:22.271 08:45:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:22.271 08:45:39 -- target/filesystem.sh@25 -- # sync 00:11:22.271 08:45:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:22.271 08:45:39 -- target/filesystem.sh@27 -- # sync 00:11:22.271 08:45:39 -- target/filesystem.sh@29 -- # i=0 00:11:22.271 08:45:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:22.271 08:45:39 -- target/filesystem.sh@37 -- # kill -0 1944541 00:11:22.271 08:45:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:22.271 08:45:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:22.271 08:45:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:22.271 08:45:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:22.529 00:11:22.529 real 0m1.037s 00:11:22.529 user 0m0.032s 00:11:22.529 sys 0m0.143s 00:11:22.529 08:45:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:22.529 08:45:39 -- common/autotest_common.sh@10 -- # set +x 00:11:22.529 ************************************ 00:11:22.529 END TEST filesystem_in_capsule_btrfs 00:11:22.529 ************************************ 00:11:22.529 08:45:39 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:22.529 08:45:39 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:22.529 08:45:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.529 08:45:39 -- common/autotest_common.sh@10 -- # set +x 00:11:22.529 ************************************ 00:11:22.529 START TEST filesystem_in_capsule_xfs 00:11:22.529 ************************************ 00:11:22.529 08:45:39 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:11:22.529 08:45:39 -- target/filesystem.sh@18 -- # fstype=xfs 00:11:22.529 08:45:39 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:22.529 08:45:39 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:22.529 08:45:39 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:11:22.529 08:45:39 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:22.529 08:45:39 -- common/autotest_common.sh@914 -- # local i=0 00:11:22.529 08:45:39 -- common/autotest_common.sh@915 -- # local force 00:11:22.529 08:45:39 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:11:22.529 08:45:39 -- common/autotest_common.sh@920 -- # force=-f 
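mkfs.xfs runs next in the trace below; the force-flag selection make_filesystem just performed (force=-F for ext4, force=-f for btrfs and xfs) condenses to roughly the following. This is a sketch reconstructed from the xtrace at lines @912-@923 of autotest_common.sh; the real helper's retry loop around mkfs is not visible in this log and is omitted:

make_filesystem() {
  local fstype=$1 dev_name=$2 i=0 force
  # ext4 uses -F to force; the other mkfs tools use -f
  if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
  mkfs.$fstype $force "$dev_name"
}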
00:11:22.529 08:45:39 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:22.787 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:22.787 = sectsz=512 attr=2, projid32bit=1 00:11:22.787 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:22.787 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:22.787 data = bsize=4096 blocks=130560, imaxpct=25 00:11:22.787 = sunit=0 swidth=0 blks 00:11:22.787 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:22.787 log =internal log bsize=4096 blocks=16384, version=2 00:11:22.787 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:22.787 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:23.719 Discarding blocks...Done. 00:11:23.719 08:45:40 -- common/autotest_common.sh@931 -- # return 0 00:11:23.719 08:45:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:25.675 08:45:42 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:25.675 08:45:42 -- target/filesystem.sh@25 -- # sync 00:11:25.675 08:45:42 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:25.675 08:45:42 -- target/filesystem.sh@27 -- # sync 00:11:25.675 08:45:42 -- target/filesystem.sh@29 -- # i=0 00:11:25.675 08:45:42 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:25.675 08:45:42 -- target/filesystem.sh@37 -- # kill -0 1944541 00:11:25.675 08:45:42 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:25.675 08:45:42 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:25.675 08:45:42 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:25.675 08:45:42 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:25.675 00:11:25.675 real 0m2.837s 00:11:25.675 user 0m0.026s 00:11:25.675 sys 0m0.087s 00:11:25.675 08:45:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:25.675 08:45:42 -- common/autotest_common.sh@10 -- # set +x 00:11:25.675 ************************************ 00:11:25.675 END TEST filesystem_in_capsule_xfs 00:11:25.675 ************************************ 00:11:25.675 08:45:42 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:25.675 08:45:42 -- target/filesystem.sh@93 -- # sync 00:11:25.675 08:45:42 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.675 08:45:42 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.675 08:45:42 -- common/autotest_common.sh@1205 -- # local i=0 00:11:25.675 08:45:42 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:25.675 08:45:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.675 08:45:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:25.675 08:45:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.675 08:45:42 -- common/autotest_common.sh@1217 -- # return 0 00:11:25.675 08:45:42 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.675 08:45:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.675 08:45:42 -- common/autotest_common.sh@10 -- # set +x 00:11:25.675 08:45:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.675 08:45:42 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:25.675 08:45:42 -- target/filesystem.sh@101 -- # killprocess 1944541 00:11:25.675 08:45:42 -- common/autotest_common.sh@936 -- # '[' -z 1944541 ']' 00:11:25.675 08:45:42 -- common/autotest_common.sh@940 -- # kill -0 1944541 
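Each filesystem_* test in this run, xfs included, performs the same mount-and-touch verification after mkfs before tallying its real/user/sys times. A sketch of that check, with the device paths and target pid from this log (the helper's unmount retry loop, traced as i=0 above, is left out):

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 1944541                          # nvmf target still running?
lsblk -l -o NAME | grep -q -w nvme0n1    # controller still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible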
00:11:25.675 08:45:42 -- common/autotest_common.sh@941 -- # uname 00:11:25.675 08:45:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:25.675 08:45:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1944541 00:11:25.932 08:45:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:25.932 08:45:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:25.932 08:45:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1944541' 00:11:25.932 killing process with pid 1944541 00:11:25.932 08:45:42 -- common/autotest_common.sh@955 -- # kill 1944541 00:11:25.932 08:45:42 -- common/autotest_common.sh@960 -- # wait 1944541 00:11:26.191 08:45:43 -- target/filesystem.sh@102 -- # nvmfpid= 00:11:26.191 00:11:26.191 real 0m12.268s 00:11:26.191 user 0m47.823s 00:11:26.191 sys 0m2.022s 00:11:26.191 08:45:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:26.191 08:45:43 -- common/autotest_common.sh@10 -- # set +x 00:11:26.191 ************************************ 00:11:26.191 END TEST nvmf_filesystem_in_capsule 00:11:26.192 ************************************ 00:11:26.192 08:45:43 -- target/filesystem.sh@108 -- # nvmftestfini 00:11:26.192 08:45:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:26.192 08:45:43 -- nvmf/common.sh@117 -- # sync 00:11:26.192 08:45:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:26.192 08:45:43 -- nvmf/common.sh@120 -- # set +e 00:11:26.192 08:45:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.192 08:45:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:26.192 rmmod nvme_tcp 00:11:26.192 rmmod nvme_fabrics 00:11:26.192 rmmod nvme_keyring 00:11:26.192 08:45:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.192 08:45:43 -- nvmf/common.sh@124 -- # set -e 00:11:26.192 08:45:43 -- nvmf/common.sh@125 -- # return 0 00:11:26.192 08:45:43 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:11:26.192 08:45:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:26.192 08:45:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:26.192 08:45:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:26.192 08:45:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:26.192 08:45:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:26.192 08:45:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.192 08:45:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.192 08:45:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.777 08:45:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:28.777 00:11:28.777 real 0m34.709s 00:11:28.777 user 1m38.778s 00:11:28.777 sys 0m9.786s 00:11:28.777 08:45:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:28.777 08:45:45 -- common/autotest_common.sh@10 -- # set +x 00:11:28.777 ************************************ 00:11:28.777 END TEST nvmf_filesystem 00:11:28.777 ************************************ 00:11:28.777 08:45:45 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:28.777 08:45:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:28.777 08:45:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:28.777 08:45:45 -- common/autotest_common.sh@10 -- # set +x 00:11:28.777 ************************************ 00:11:28.777 START TEST nvmf_discovery 00:11:28.777 ************************************ 00:11:28.777 
08:45:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:28.777 * Looking for test storage... 00:11:28.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.777 08:45:45 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.777 08:45:45 -- nvmf/common.sh@7 -- # uname -s 00:11:28.777 08:45:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.777 08:45:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.777 08:45:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.777 08:45:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.777 08:45:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.777 08:45:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.777 08:45:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.777 08:45:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.777 08:45:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.777 08:45:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.777 08:45:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:28.777 08:45:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:28.777 08:45:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.777 08:45:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.777 08:45:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.777 08:45:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.777 08:45:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.777 08:45:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.777 08:45:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.777 08:45:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.777 08:45:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.777 08:45:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.778 08:45:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.778 08:45:45 -- paths/export.sh@5 -- # export PATH 00:11:28.778 08:45:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.778 08:45:45 -- nvmf/common.sh@47 -- # : 0 00:11:28.778 08:45:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.778 08:45:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.778 08:45:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.778 08:45:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.778 08:45:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.778 08:45:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.778 08:45:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.778 08:45:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.778 08:45:45 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:28.778 08:45:45 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:28.778 08:45:45 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:28.778 08:45:45 -- target/discovery.sh@15 -- # hash nvme 00:11:28.778 08:45:45 -- target/discovery.sh@20 -- # nvmftestinit 00:11:28.778 08:45:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:28.778 08:45:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.778 08:45:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:28.778 08:45:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:28.778 08:45:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:28.778 08:45:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.778 08:45:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.778 08:45:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.778 08:45:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:28.778 08:45:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:28.778 08:45:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.778 08:45:45 -- common/autotest_common.sh@10 -- # set +x 00:11:35.343 08:45:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:35.343 08:45:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.343 08:45:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.343 08:45:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.343 08:45:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.343 08:45:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.343 08:45:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.343 08:45:52 -- 
nvmf/common.sh@295 -- # net_devs=() 00:11:35.343 08:45:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.343 08:45:52 -- nvmf/common.sh@296 -- # e810=() 00:11:35.343 08:45:52 -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.343 08:45:52 -- nvmf/common.sh@297 -- # x722=() 00:11:35.343 08:45:52 -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.343 08:45:52 -- nvmf/common.sh@298 -- # mlx=() 00:11:35.343 08:45:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.343 08:45:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.343 08:45:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.343 08:45:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.343 08:45:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.343 08:45:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.343 08:45:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.343 08:45:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.343 08:45:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.343 08:45:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.343 08:45:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.343 08:45:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.343 08:45:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.343 08:45:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:35.343 08:45:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.343 08:45:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.343 08:45:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:35.343 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:35.343 08:45:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.343 08:45:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:35.343 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:35.343 08:45:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.343 08:45:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.343 08:45:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.343 08:45:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:35.343 08:45:52 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.343 08:45:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:35.343 Found net devices under 0000:af:00.0: cvl_0_0 00:11:35.343 08:45:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.343 08:45:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.343 08:45:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.343 08:45:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:35.343 08:45:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.343 08:45:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:35.343 Found net devices under 0000:af:00.1: cvl_0_1 00:11:35.343 08:45:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.343 08:45:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:35.343 08:45:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:35.343 08:45:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:35.343 08:45:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:35.343 08:45:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.343 08:45:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.343 08:45:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.344 08:45:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:35.344 08:45:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.344 08:45:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.344 08:45:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:35.344 08:45:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.344 08:45:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.344 08:45:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:35.344 08:45:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:35.344 08:45:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.344 08:45:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.344 08:45:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.344 08:45:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.344 08:45:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:35.344 08:45:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.344 08:45:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.344 08:45:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.344 08:45:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:35.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:11:35.344 00:11:35.344 --- 10.0.0.2 ping statistics --- 00:11:35.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.344 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:11:35.344 08:45:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:35.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:11:35.344 00:11:35.344 --- 10.0.0.1 ping statistics --- 00:11:35.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.344 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:11:35.344 08:45:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.344 08:45:52 -- nvmf/common.sh@411 -- # return 0 00:11:35.344 08:45:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:35.344 08:45:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.344 08:45:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:35.344 08:45:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:35.344 08:45:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.344 08:45:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:35.344 08:45:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:35.344 08:45:52 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:35.344 08:45:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:35.344 08:45:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:35.344 08:45:52 -- common/autotest_common.sh@10 -- # set +x 00:11:35.344 08:45:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.344 08:45:52 -- nvmf/common.sh@470 -- # nvmfpid=1950605 00:11:35.344 08:45:52 -- nvmf/common.sh@471 -- # waitforlisten 1950605 00:11:35.344 08:45:52 -- common/autotest_common.sh@817 -- # '[' -z 1950605 ']' 00:11:35.344 08:45:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.344 08:45:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:35.344 08:45:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.344 08:45:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:35.344 08:45:52 -- common/autotest_common.sh@10 -- # set +x 00:11:35.344 [2024-04-26 08:45:52.460352] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:11:35.344 [2024-04-26 08:45:52.460397] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.344 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.344 [2024-04-26 08:45:52.535349] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.602 [2024-04-26 08:45:52.611152] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.602 [2024-04-26 08:45:52.611186] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.602 [2024-04-26 08:45:52.611195] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.602 [2024-04-26 08:45:52.611203] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.602 [2024-04-26 08:45:52.611227] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
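The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." prompt above comes from waitforlisten, which blocks until the freshly launched nvmf_tgt can take RPCs. A simplified sketch of that polling loop, assuming the default socket path (the real helper in autotest_common.sh is more elaborate and also probes the socket with spdk/scripts/rpc.py before declaring success):

    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # Bail out early if the target died during EAL/app init.
            kill -0 "$pid" 2>/dev/null || return 1
            # The RPC UNIX socket appears once the app can serve requests.
            [ -S "$sock" ] && return 0
            sleep 0.1
        done
        return 1
    }
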
00:11:35.602 [2024-04-26 08:45:52.611269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.602 [2024-04-26 08:45:52.611362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.602 [2024-04-26 08:45:52.611445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.602 [2024-04-26 08:45:52.611447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.168 08:45:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:36.168 08:45:53 -- common/autotest_common.sh@850 -- # return 0 00:11:36.168 08:45:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:36.168 08:45:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:36.168 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.168 08:45:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.168 08:45:53 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:36.168 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.168 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.168 [2024-04-26 08:45:53.340413] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.168 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.168 08:45:53 -- target/discovery.sh@26 -- # seq 1 4 00:11:36.168 08:45:53 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:36.168 08:45:53 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:36.168 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.168 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.168 Null1 00:11:36.168 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.168 08:45:53 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:36.168 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.168 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.168 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.168 08:45:53 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:36.168 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.168 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.168 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.168 08:45:53 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.169 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.169 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.169 [2024-04-26 08:45:53.396754] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.169 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.169 08:45:53 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:36.169 08:45:53 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:36.169 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.169 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.169 Null2 00:11:36.169 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.169 08:45:53 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:36.169 08:45:53 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.169 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.430 08:45:53 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:36.430 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.430 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.430 08:45:53 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:36.430 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.430 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.430 08:45:53 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:36.430 08:45:53 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:36.430 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.430 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 Null3 00:11:36.430 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.430 08:45:53 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:36.430 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.430 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.430 08:45:53 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:36.430 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.430 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.430 08:45:53 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:36.430 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.430 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.430 08:45:53 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:36.430 08:45:53 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:36.430 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.430 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 Null4 00:11:36.430 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.430 08:45:53 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:36.430 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.430 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.430 08:45:53 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:36.430 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.430 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.430 08:45:53 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:36.430 
08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.430 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.430 08:45:53 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:36.430 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.430 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.430 08:45:53 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:36.430 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.430 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.430 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.430 08:45:53 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:11:36.430 00:11:36.430 Discovery Log Number of Records 6, Generation counter 6 00:11:36.430 =====Discovery Log Entry 0====== 00:11:36.430 trtype: tcp 00:11:36.430 adrfam: ipv4 00:11:36.430 subtype: current discovery subsystem 00:11:36.430 treq: not required 00:11:36.430 portid: 0 00:11:36.430 trsvcid: 4420 00:11:36.430 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:36.430 traddr: 10.0.0.2 00:11:36.430 eflags: explicit discovery connections, duplicate discovery information 00:11:36.430 sectype: none 00:11:36.431 =====Discovery Log Entry 1====== 00:11:36.431 trtype: tcp 00:11:36.431 adrfam: ipv4 00:11:36.431 subtype: nvme subsystem 00:11:36.431 treq: not required 00:11:36.431 portid: 0 00:11:36.431 trsvcid: 4420 00:11:36.431 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:36.431 traddr: 10.0.0.2 00:11:36.431 eflags: none 00:11:36.431 sectype: none 00:11:36.431 =====Discovery Log Entry 2====== 00:11:36.431 trtype: tcp 00:11:36.431 adrfam: ipv4 00:11:36.431 subtype: nvme subsystem 00:11:36.431 treq: not required 00:11:36.431 portid: 0 00:11:36.431 trsvcid: 4420 00:11:36.431 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:36.431 traddr: 10.0.0.2 00:11:36.431 eflags: none 00:11:36.431 sectype: none 00:11:36.431 =====Discovery Log Entry 3====== 00:11:36.431 trtype: tcp 00:11:36.431 adrfam: ipv4 00:11:36.431 subtype: nvme subsystem 00:11:36.431 treq: not required 00:11:36.431 portid: 0 00:11:36.431 trsvcid: 4420 00:11:36.431 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:36.431 traddr: 10.0.0.2 00:11:36.431 eflags: none 00:11:36.431 sectype: none 00:11:36.431 =====Discovery Log Entry 4====== 00:11:36.431 trtype: tcp 00:11:36.431 adrfam: ipv4 00:11:36.431 subtype: nvme subsystem 00:11:36.431 treq: not required 00:11:36.431 portid: 0 00:11:36.431 trsvcid: 4420 00:11:36.431 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:36.431 traddr: 10.0.0.2 00:11:36.431 eflags: none 00:11:36.431 sectype: none 00:11:36.431 =====Discovery Log Entry 5====== 00:11:36.431 trtype: tcp 00:11:36.431 adrfam: ipv4 00:11:36.431 subtype: discovery subsystem referral 00:11:36.431 treq: not required 00:11:36.431 portid: 0 00:11:36.431 trsvcid: 4430 00:11:36.431 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:36.431 traddr: 10.0.0.2 00:11:36.431 eflags: none 00:11:36.431 sectype: none 00:11:36.431 08:45:53 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:36.431 Perform nvmf subsystem discovery via RPC 00:11:36.431 08:45:53 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:36.431 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.431 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.431 [2024-04-26 08:45:53.605235] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:11:36.431 [ 00:11:36.431 { 00:11:36.431 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:36.431 "subtype": "Discovery", 00:11:36.431 "listen_addresses": [ 00:11:36.431 { 00:11:36.431 "transport": "TCP", 00:11:36.431 "trtype": "TCP", 00:11:36.431 "adrfam": "IPv4", 00:11:36.431 "traddr": "10.0.0.2", 00:11:36.431 "trsvcid": "4420" 00:11:36.431 } 00:11:36.431 ], 00:11:36.431 "allow_any_host": true, 00:11:36.431 "hosts": [] 00:11:36.431 }, 00:11:36.431 { 00:11:36.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:36.431 "subtype": "NVMe", 00:11:36.431 "listen_addresses": [ 00:11:36.431 { 00:11:36.431 "transport": "TCP", 00:11:36.431 "trtype": "TCP", 00:11:36.431 "adrfam": "IPv4", 00:11:36.431 "traddr": "10.0.0.2", 00:11:36.431 "trsvcid": "4420" 00:11:36.431 } 00:11:36.431 ], 00:11:36.431 "allow_any_host": true, 00:11:36.431 "hosts": [], 00:11:36.431 "serial_number": "SPDK00000000000001", 00:11:36.431 "model_number": "SPDK bdev Controller", 00:11:36.431 "max_namespaces": 32, 00:11:36.431 "min_cntlid": 1, 00:11:36.431 "max_cntlid": 65519, 00:11:36.431 "namespaces": [ 00:11:36.431 { 00:11:36.431 "nsid": 1, 00:11:36.431 "bdev_name": "Null1", 00:11:36.431 "name": "Null1", 00:11:36.431 "nguid": "1C023E63B01A4BBEA5E1C6B3CB79A835", 00:11:36.431 "uuid": "1c023e63-b01a-4bbe-a5e1-c6b3cb79a835" 00:11:36.431 } 00:11:36.431 ] 00:11:36.431 }, 00:11:36.431 { 00:11:36.431 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:36.431 "subtype": "NVMe", 00:11:36.431 "listen_addresses": [ 00:11:36.431 { 00:11:36.431 "transport": "TCP", 00:11:36.431 "trtype": "TCP", 00:11:36.431 "adrfam": "IPv4", 00:11:36.431 "traddr": "10.0.0.2", 00:11:36.431 "trsvcid": "4420" 00:11:36.431 } 00:11:36.431 ], 00:11:36.431 "allow_any_host": true, 00:11:36.431 "hosts": [], 00:11:36.431 "serial_number": "SPDK00000000000002", 00:11:36.431 "model_number": "SPDK bdev Controller", 00:11:36.431 "max_namespaces": 32, 00:11:36.431 "min_cntlid": 1, 00:11:36.431 "max_cntlid": 65519, 00:11:36.431 "namespaces": [ 00:11:36.431 { 00:11:36.431 "nsid": 1, 00:11:36.431 "bdev_name": "Null2", 00:11:36.431 "name": "Null2", 00:11:36.431 "nguid": "6BB03606019449699A1970DC99D05A4E", 00:11:36.431 "uuid": "6bb03606-0194-4969-9a19-70dc99d05a4e" 00:11:36.431 } 00:11:36.431 ] 00:11:36.431 }, 00:11:36.431 { 00:11:36.431 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:36.431 "subtype": "NVMe", 00:11:36.431 "listen_addresses": [ 00:11:36.431 { 00:11:36.431 "transport": "TCP", 00:11:36.431 "trtype": "TCP", 00:11:36.431 "adrfam": "IPv4", 00:11:36.431 "traddr": "10.0.0.2", 00:11:36.431 "trsvcid": "4420" 00:11:36.431 } 00:11:36.431 ], 00:11:36.431 "allow_any_host": true, 00:11:36.431 "hosts": [], 00:11:36.431 "serial_number": "SPDK00000000000003", 00:11:36.431 "model_number": "SPDK bdev Controller", 00:11:36.431 "max_namespaces": 32, 00:11:36.431 "min_cntlid": 1, 00:11:36.431 "max_cntlid": 65519, 00:11:36.431 "namespaces": [ 00:11:36.431 { 00:11:36.431 "nsid": 1, 00:11:36.431 "bdev_name": "Null3", 00:11:36.431 "name": "Null3", 00:11:36.431 "nguid": "30EE4E8D0013448CA79A9EDEFF14EEAB", 00:11:36.431 "uuid": "30ee4e8d-0013-448c-a79a-9edeff14eeab" 00:11:36.431 } 00:11:36.431 ] 
00:11:36.431 }, 00:11:36.431 { 00:11:36.431 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:36.431 "subtype": "NVMe", 00:11:36.431 "listen_addresses": [ 00:11:36.431 { 00:11:36.431 "transport": "TCP", 00:11:36.431 "trtype": "TCP", 00:11:36.431 "adrfam": "IPv4", 00:11:36.431 "traddr": "10.0.0.2", 00:11:36.431 "trsvcid": "4420" 00:11:36.431 } 00:11:36.431 ], 00:11:36.431 "allow_any_host": true, 00:11:36.431 "hosts": [], 00:11:36.431 "serial_number": "SPDK00000000000004", 00:11:36.431 "model_number": "SPDK bdev Controller", 00:11:36.431 "max_namespaces": 32, 00:11:36.431 "min_cntlid": 1, 00:11:36.431 "max_cntlid": 65519, 00:11:36.431 "namespaces": [ 00:11:36.431 { 00:11:36.431 "nsid": 1, 00:11:36.431 "bdev_name": "Null4", 00:11:36.431 "name": "Null4", 00:11:36.431 "nguid": "0A5A441499244CFC8EDFDE91B06F6ACA", 00:11:36.431 "uuid": "0a5a4414-9924-4cfc-8edf-de91b06f6aca" 00:11:36.431 } 00:11:36.431 ] 00:11:36.431 } 00:11:36.431 ] 00:11:36.431 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.431 08:45:53 -- target/discovery.sh@42 -- # seq 1 4 00:11:36.431 08:45:53 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:36.431 08:45:53 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.431 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.431 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.431 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.431 08:45:53 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:36.431 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.431 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.431 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.431 08:45:53 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:36.431 08:45:53 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:36.431 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.431 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.431 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.431 08:45:53 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:36.431 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.431 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.690 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.690 08:45:53 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:36.690 08:45:53 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:36.690 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.690 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.690 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.690 08:45:53 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:36.690 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.690 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.690 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.690 08:45:53 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:36.690 08:45:53 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:36.690 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.690 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.690 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
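The deletions running here mirror the setup at the top of discovery.sh: for each i in 1..4 the test created a null bdev, wrapped it in subsystem nqn.2016-06.io.spdk:cnode$i, and exposed it on 10.0.0.2:4420; teardown drops each subsystem first and then its backing bdev. Condensed to the RPCs visible in this log (rpc_cmd is the harness wrapper around spdk/scripts/rpc.py):

    # Setup, as logged earlier in this test:
    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create "Null$i" 102400 512
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    # Teardown, as logged here: subsystem before bdev, so no namespace
    # ever points at a deleted device.
    for i in $(seq 1 4); do
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        rpc_cmd bdev_null_delete "Null$i"
    done
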
00:11:36.690 08:45:53 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:36.690 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.690 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.690 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.690 08:45:53 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:36.690 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.690 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.690 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.690 08:45:53 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:36.690 08:45:53 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:36.690 08:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:36.690 08:45:53 -- common/autotest_common.sh@10 -- # set +x 00:11:36.690 08:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:36.690 08:45:53 -- target/discovery.sh@49 -- # check_bdevs= 00:11:36.690 08:45:53 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:36.690 08:45:53 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:36.690 08:45:53 -- target/discovery.sh@57 -- # nvmftestfini 00:11:36.690 08:45:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:36.690 08:45:53 -- nvmf/common.sh@117 -- # sync 00:11:36.691 08:45:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:36.691 08:45:53 -- nvmf/common.sh@120 -- # set +e 00:11:36.691 08:45:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:36.691 08:45:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:36.691 rmmod nvme_tcp 00:11:36.691 rmmod nvme_fabrics 00:11:36.691 rmmod nvme_keyring 00:11:36.691 08:45:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:36.691 08:45:53 -- nvmf/common.sh@124 -- # set -e 00:11:36.691 08:45:53 -- nvmf/common.sh@125 -- # return 0 00:11:36.691 08:45:53 -- nvmf/common.sh@478 -- # '[' -n 1950605 ']' 00:11:36.691 08:45:53 -- nvmf/common.sh@479 -- # killprocess 1950605 00:11:36.691 08:45:53 -- common/autotest_common.sh@936 -- # '[' -z 1950605 ']' 00:11:36.691 08:45:53 -- common/autotest_common.sh@940 -- # kill -0 1950605 00:11:36.691 08:45:53 -- common/autotest_common.sh@941 -- # uname 00:11:36.691 08:45:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:36.691 08:45:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1950605 00:11:36.691 08:45:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:36.691 08:45:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:36.691 08:45:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1950605' 00:11:36.691 killing process with pid 1950605 00:11:36.691 08:45:53 -- common/autotest_common.sh@955 -- # kill 1950605 00:11:36.691 [2024-04-26 08:45:53.875869] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:11:36.691 08:45:53 -- common/autotest_common.sh@960 -- # wait 1950605 00:11:36.949 08:45:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:36.949 08:45:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:36.949 08:45:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:36.949 08:45:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.949 08:45:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:36.949 08:45:54 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.949 08:45:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.949 08:45:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.483 08:45:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:39.483 00:11:39.483 real 0m10.484s 00:11:39.483 user 0m7.790s 00:11:39.483 sys 0m5.434s 00:11:39.483 08:45:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:39.483 08:45:56 -- common/autotest_common.sh@10 -- # set +x 00:11:39.483 ************************************ 00:11:39.483 END TEST nvmf_discovery 00:11:39.483 ************************************ 00:11:39.483 08:45:56 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:39.483 08:45:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:39.483 08:45:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:39.483 08:45:56 -- common/autotest_common.sh@10 -- # set +x 00:11:39.483 ************************************ 00:11:39.483 START TEST nvmf_referrals 00:11:39.483 ************************************ 00:11:39.483 08:45:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:39.483 * Looking for test storage... 00:11:39.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.483 08:45:56 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.483 08:45:56 -- nvmf/common.sh@7 -- # uname -s 00:11:39.483 08:45:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.483 08:45:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.483 08:45:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.483 08:45:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.483 08:45:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.483 08:45:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.483 08:45:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.483 08:45:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.483 08:45:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.483 08:45:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.483 08:45:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:39.483 08:45:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:39.483 08:45:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.483 08:45:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.483 08:45:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.483 08:45:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.483 08:45:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.483 08:45:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.483 08:45:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.483 08:45:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.483 08:45:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.483 08:45:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.484 08:45:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.484 08:45:56 -- paths/export.sh@5 -- # export PATH 00:11:39.484 08:45:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.484 08:45:56 -- nvmf/common.sh@47 -- # : 0 00:11:39.484 08:45:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:39.484 08:45:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:39.484 08:45:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.484 08:45:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.484 08:45:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.484 08:45:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:39.484 08:45:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:39.484 08:45:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:39.484 08:45:56 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:39.484 08:45:56 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:39.484 08:45:56 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:39.484 08:45:56 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:39.484 08:45:56 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:39.484 08:45:56 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:39.484 08:45:56 -- target/referrals.sh@37 -- # nvmftestinit 00:11:39.484 08:45:56 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:11:39.484 08:45:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.484 08:45:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:39.484 08:45:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:39.484 08:45:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:39.484 08:45:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.484 08:45:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:39.484 08:45:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.484 08:45:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:39.484 08:45:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:39.484 08:45:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:39.484 08:45:56 -- common/autotest_common.sh@10 -- # set +x 00:11:46.058 08:46:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:46.058 08:46:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:46.058 08:46:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:46.058 08:46:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:46.058 08:46:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:46.058 08:46:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:46.058 08:46:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:46.058 08:46:03 -- nvmf/common.sh@295 -- # net_devs=() 00:11:46.058 08:46:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:46.058 08:46:03 -- nvmf/common.sh@296 -- # e810=() 00:11:46.058 08:46:03 -- nvmf/common.sh@296 -- # local -ga e810 00:11:46.058 08:46:03 -- nvmf/common.sh@297 -- # x722=() 00:11:46.058 08:46:03 -- nvmf/common.sh@297 -- # local -ga x722 00:11:46.058 08:46:03 -- nvmf/common.sh@298 -- # mlx=() 00:11:46.058 08:46:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:46.058 08:46:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.058 08:46:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.058 08:46:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.058 08:46:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.058 08:46:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.058 08:46:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.058 08:46:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.058 08:46:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.058 08:46:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.058 08:46:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.058 08:46:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.058 08:46:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:46.058 08:46:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:46.058 08:46:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:46.058 08:46:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:46.058 08:46:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:46.058 08:46:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:46.058 08:46:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:46.059 08:46:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:46.059 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:46.059 08:46:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:46.059 08:46:03 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:46.059 08:46:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.059 08:46:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.059 08:46:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:46.059 08:46:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:46.059 08:46:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:46.059 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:46.059 08:46:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:46.059 08:46:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:46.059 08:46:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.059 08:46:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.059 08:46:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:46.059 08:46:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:46.059 08:46:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:46.059 08:46:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:46.059 08:46:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:46.059 08:46:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.059 08:46:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:46.059 08:46:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.059 08:46:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:46.059 Found net devices under 0000:af:00.0: cvl_0_0 00:11:46.059 08:46:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.059 08:46:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:46.059 08:46:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.059 08:46:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:46.059 08:46:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.059 08:46:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:46.059 Found net devices under 0000:af:00.1: cvl_0_1 00:11:46.059 08:46:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.059 08:46:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:46.059 08:46:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:46.059 08:46:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:46.059 08:46:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:46.059 08:46:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:46.059 08:46:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.059 08:46:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.059 08:46:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:46.059 08:46:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:46.059 08:46:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:46.059 08:46:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:46.059 08:46:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:46.059 08:46:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:46.059 08:46:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.059 08:46:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:46.059 08:46:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:46.059 08:46:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:46.059 08:46:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
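The ip commands around this point build the test topology: one physical port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace for the target at 10.0.0.2/24, its sibling (cvl_0_1) stays in the root namespace for the initiator at 10.0.0.1/24, TCP port 4420 is opened in iptables, and reachability is ping-checked in both directions. On a machine without the E810 port pair, the same two-namespace layout can be approximated with a veth pair (interface and namespace names below are illustrative):

    ip netns add spdk_tgt_ns
    ip link add veth_init type veth peer name veth_tgt
    ip link set veth_tgt netns spdk_tgt_ns
    ip addr add 10.0.0.1/24 dev veth_init
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_init up
    ip netns exec spdk_tgt_ns ip link set veth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up
    # Mirror the harness's firewall rule for NVMe/TCP's default port.
    iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
    # Verify both directions, as the log does next.
    ping -c 1 10.0.0.2
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1
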
00:11:46.059 08:46:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:46.059 08:46:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:46.059 08:46:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:46.059 08:46:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:46.318 08:46:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:46.318 08:46:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:46.318 08:46:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:46.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:11:46.318 00:11:46.318 --- 10.0.0.2 ping statistics --- 00:11:46.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.318 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:11:46.318 08:46:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:46.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:46.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:11:46.318 00:11:46.318 --- 10.0.0.1 ping statistics --- 00:11:46.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.318 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:11:46.318 08:46:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.318 08:46:03 -- nvmf/common.sh@411 -- # return 0 00:11:46.318 08:46:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:46.318 08:46:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.318 08:46:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:46.318 08:46:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:46.318 08:46:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.318 08:46:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:46.318 08:46:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:46.318 08:46:03 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:46.318 08:46:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:46.318 08:46:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:46.318 08:46:03 -- common/autotest_common.sh@10 -- # set +x 00:11:46.318 08:46:03 -- nvmf/common.sh@470 -- # nvmfpid=1954604 00:11:46.318 08:46:03 -- nvmf/common.sh@471 -- # waitforlisten 1954604 00:11:46.318 08:46:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:46.319 08:46:03 -- common/autotest_common.sh@817 -- # '[' -z 1954604 ']' 00:11:46.319 08:46:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.319 08:46:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:46.319 08:46:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.319 08:46:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:46.319 08:46:03 -- common/autotest_common.sh@10 -- # set +x 00:11:46.319 [2024-04-26 08:46:03.551006] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:11:46.319 [2024-04-26 08:46:03.551054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.578 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.578 [2024-04-26 08:46:03.626499] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.578 [2024-04-26 08:46:03.698762] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.578 [2024-04-26 08:46:03.698797] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.578 [2024-04-26 08:46:03.698807] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.578 [2024-04-26 08:46:03.698815] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.578 [2024-04-26 08:46:03.698822] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.578 [2024-04-26 08:46:03.698873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.578 [2024-04-26 08:46:03.698967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.578 [2024-04-26 08:46:03.699049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.578 [2024-04-26 08:46:03.699051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.145 08:46:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:47.145 08:46:04 -- common/autotest_common.sh@850 -- # return 0 00:11:47.145 08:46:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:47.145 08:46:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:47.145 08:46:04 -- common/autotest_common.sh@10 -- # set +x 00:11:47.403 08:46:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.403 08:46:04 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:47.403 08:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.403 08:46:04 -- common/autotest_common.sh@10 -- # set +x 00:11:47.403 [2024-04-26 08:46:04.417168] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.403 08:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.403 08:46:04 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:47.403 08:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.403 08:46:04 -- common/autotest_common.sh@10 -- # set +x 00:11:47.403 [2024-04-26 08:46:04.433373] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:47.403 08:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.403 08:46:04 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:47.403 08:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.403 08:46:04 -- common/autotest_common.sh@10 -- # set +x 00:11:47.403 08:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.403 08:46:04 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:47.403 08:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.403 08:46:04 -- common/autotest_common.sh@10 -- # set +x 00:11:47.403 08:46:04 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:11:47.403 08:46:04 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:47.403 08:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.403 08:46:04 -- common/autotest_common.sh@10 -- # set +x 00:11:47.403 08:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.403 08:46:04 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:47.403 08:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.403 08:46:04 -- target/referrals.sh@48 -- # jq length 00:11:47.403 08:46:04 -- common/autotest_common.sh@10 -- # set +x 00:11:47.403 08:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.403 08:46:04 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:47.403 08:46:04 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:47.403 08:46:04 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:47.403 08:46:04 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:47.403 08:46:04 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:47.403 08:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.403 08:46:04 -- target/referrals.sh@21 -- # sort 00:11:47.403 08:46:04 -- common/autotest_common.sh@10 -- # set +x 00:11:47.403 08:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.403 08:46:04 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:47.403 08:46:04 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:47.403 08:46:04 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:47.403 08:46:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:47.403 08:46:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:47.403 08:46:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.404 08:46:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:47.404 08:46:04 -- target/referrals.sh@26 -- # sort 00:11:47.662 08:46:04 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:47.662 08:46:04 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:47.662 08:46:04 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:47.662 08:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.662 08:46:04 -- common/autotest_common.sh@10 -- # set +x 00:11:47.662 08:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.662 08:46:04 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:47.662 08:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.662 08:46:04 -- common/autotest_common.sh@10 -- # set +x 00:11:47.662 08:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.662 08:46:04 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:47.662 08:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.662 08:46:04 -- common/autotest_common.sh@10 -- # set +x 00:11:47.662 08:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.662 08:46:04 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:11:47.662 08:46:04 -- target/referrals.sh@56 -- # jq length 00:11:47.662 08:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.662 08:46:04 -- common/autotest_common.sh@10 -- # set +x 00:11:47.662 08:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.662 08:46:04 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:47.662 08:46:04 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:47.662 08:46:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:47.662 08:46:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:47.662 08:46:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.662 08:46:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:47.662 08:46:04 -- target/referrals.sh@26 -- # sort 00:11:47.920 08:46:04 -- target/referrals.sh@26 -- # echo 00:11:47.920 08:46:04 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:47.920 08:46:04 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:47.920 08:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.920 08:46:04 -- common/autotest_common.sh@10 -- # set +x 00:11:47.920 08:46:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.920 08:46:04 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:47.920 08:46:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.920 08:46:04 -- common/autotest_common.sh@10 -- # set +x 00:11:47.920 08:46:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.920 08:46:05 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:47.920 08:46:05 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:47.920 08:46:05 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:47.920 08:46:05 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:47.920 08:46:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.920 08:46:05 -- common/autotest_common.sh@10 -- # set +x 00:11:47.920 08:46:05 -- target/referrals.sh@21 -- # sort 00:11:47.920 08:46:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.920 08:46:05 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:47.920 08:46:05 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:47.920 08:46:05 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:47.920 08:46:05 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:47.920 08:46:05 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:47.920 08:46:05 -- target/referrals.sh@26 -- # sort 00:11:47.920 08:46:05 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.920 08:46:05 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:48.179 08:46:05 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:48.179 08:46:05 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:48.179 08:46:05 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:11:48.179 08:46:05 -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:48.179 08:46:05 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:48.179 08:46:05 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.179 08:46:05 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:48.179 08:46:05 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:48.179 08:46:05 -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:48.179 08:46:05 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:48.179 08:46:05 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:48.179 08:46:05 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.179 08:46:05 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:48.437 08:46:05 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:48.437 08:46:05 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:48.437 08:46:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.437 08:46:05 -- common/autotest_common.sh@10 -- # set +x 00:11:48.437 08:46:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.437 08:46:05 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:48.437 08:46:05 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:48.437 08:46:05 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:48.437 08:46:05 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:48.437 08:46:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.437 08:46:05 -- common/autotest_common.sh@10 -- # set +x 00:11:48.437 08:46:05 -- target/referrals.sh@21 -- # sort 00:11:48.437 08:46:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.437 08:46:05 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:48.437 08:46:05 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:48.437 08:46:05 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:48.437 08:46:05 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:48.437 08:46:05 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:48.437 08:46:05 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.437 08:46:05 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:48.437 08:46:05 -- target/referrals.sh@26 -- # sort 00:11:48.696 08:46:05 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:48.696 08:46:05 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:48.696 08:46:05 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:48.696 08:46:05 -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:48.696 08:46:05 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:11:48.696 08:46:05 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.696 08:46:05 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:48.696 08:46:05 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:48.696 08:46:05 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:48.696 08:46:05 -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:48.696 08:46:05 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:48.696 08:46:05 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.696 08:46:05 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:48.955 08:46:05 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:48.955 08:46:05 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:48.955 08:46:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.955 08:46:05 -- common/autotest_common.sh@10 -- # set +x 00:11:48.955 08:46:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.955 08:46:05 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:48.955 08:46:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.955 08:46:05 -- target/referrals.sh@82 -- # jq length 00:11:48.955 08:46:05 -- common/autotest_common.sh@10 -- # set +x 00:11:48.955 08:46:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.955 08:46:06 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:48.955 08:46:06 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:48.955 08:46:06 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:48.955 08:46:06 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:48.955 08:46:06 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.955 08:46:06 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:48.955 08:46:06 -- target/referrals.sh@26 -- # sort 00:11:48.955 08:46:06 -- target/referrals.sh@26 -- # echo 00:11:48.955 08:46:06 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:48.955 08:46:06 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:48.955 08:46:06 -- target/referrals.sh@86 -- # nvmftestfini 00:11:48.955 08:46:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:48.955 08:46:06 -- nvmf/common.sh@117 -- # sync 00:11:48.955 08:46:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:48.955 08:46:06 -- nvmf/common.sh@120 -- # set +e 00:11:48.955 08:46:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:48.955 08:46:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:48.955 rmmod nvme_tcp 00:11:48.955 rmmod nvme_fabrics 00:11:48.955 rmmod nvme_keyring 00:11:48.955 08:46:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:48.955 08:46:06 -- nvmf/common.sh@124 -- # set -e 
00:11:48.955 08:46:06 -- nvmf/common.sh@125 -- # return 0 00:11:48.955 08:46:06 -- nvmf/common.sh@478 -- # '[' -n 1954604 ']' 00:11:48.955 08:46:06 -- nvmf/common.sh@479 -- # killprocess 1954604 00:11:48.955 08:46:06 -- common/autotest_common.sh@936 -- # '[' -z 1954604 ']' 00:11:48.955 08:46:06 -- common/autotest_common.sh@940 -- # kill -0 1954604 00:11:48.955 08:46:06 -- common/autotest_common.sh@941 -- # uname 00:11:48.955 08:46:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:48.955 08:46:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1954604 00:11:49.214 08:46:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:49.214 08:46:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:49.214 08:46:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1954604' 00:11:49.214 killing process with pid 1954604 00:11:49.214 08:46:06 -- common/autotest_common.sh@955 -- # kill 1954604 00:11:49.214 08:46:06 -- common/autotest_common.sh@960 -- # wait 1954604 00:11:49.214 08:46:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:49.214 08:46:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:49.214 08:46:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:49.214 08:46:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:49.214 08:46:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:49.214 08:46:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.214 08:46:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:49.214 08:46:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.750 08:46:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:51.750 00:11:51.750 real 0m12.147s 00:11:51.750 user 0m13.643s 00:11:51.750 sys 0m6.146s 00:11:51.750 08:46:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:51.750 08:46:08 -- common/autotest_common.sh@10 -- # set +x 00:11:51.750 ************************************ 00:11:51.750 END TEST nvmf_referrals 00:11:51.750 ************************************ 00:11:51.750 08:46:08 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:51.750 08:46:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:51.750 08:46:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:51.750 08:46:08 -- common/autotest_common.sh@10 -- # set +x 00:11:51.750 ************************************ 00:11:51.750 START TEST nvmf_connect_disconnect 00:11:51.750 ************************************ 00:11:51.750 08:46:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:51.750 * Looking for test storage... 
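
For quick reference, the nvmf_referrals flow exercised above reduces to a short RPC round-trip. The sketch below is condensed, not the test script itself: rpc_cmd in the xtrace is the autotest wrapper around SPDK's scripts/rpc.py, and the 10.0.0.2:8009 discovery listener and 127.0.0.x referral addresses are the values this run used.

    # Condensed referral round-trip (sketch; assumes a running nvmf_tgt and the SPDK repo as CWD)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_get_referrals | jq length    # 1 here; 3 in the run above
    # Host-side check: referrals appear as extra discovery log entries
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

The -n variants seen above attach a referral to a specific NQN: -n discovery advertises another discovery service, while -n nqn.2016-06.io.spdk:cnode1 makes the referral show up in the discovery log as an "nvme subsystem" record, which is exactly what the jq subtype filters verify.
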
00:11:51.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.750 08:46:08 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.750 08:46:08 -- nvmf/common.sh@7 -- # uname -s 00:11:51.750 08:46:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.750 08:46:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.750 08:46:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.750 08:46:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.750 08:46:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.750 08:46:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.750 08:46:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.750 08:46:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.750 08:46:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.750 08:46:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.750 08:46:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:51.750 08:46:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:51.750 08:46:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.750 08:46:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.750 08:46:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:51.750 08:46:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.750 08:46:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.750 08:46:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.750 08:46:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.750 08:46:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.750 08:46:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.750 08:46:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.751 08:46:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.751 08:46:08 -- paths/export.sh@5 -- # export PATH 00:11:51.751 08:46:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.751 08:46:08 -- nvmf/common.sh@47 -- # : 0 00:11:51.751 08:46:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:51.751 08:46:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:51.751 08:46:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.751 08:46:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.751 08:46:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.751 08:46:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:51.751 08:46:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:51.751 08:46:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:51.751 08:46:08 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:51.751 08:46:08 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:51.751 08:46:08 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:51.751 08:46:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:51.751 08:46:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.751 08:46:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:51.751 08:46:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:51.751 08:46:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:51.751 08:46:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.751 08:46:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:51.751 08:46:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.751 08:46:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:51.751 08:46:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:51.751 08:46:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:51.751 08:46:08 -- common/autotest_common.sh@10 -- # set +x 00:11:58.316 08:46:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:58.316 08:46:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:58.316 08:46:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:58.316 08:46:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:58.316 08:46:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:58.316 08:46:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:58.316 08:46:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:58.316 08:46:15 -- nvmf/common.sh@295 -- # net_devs=() 00:11:58.316 08:46:15 -- nvmf/common.sh@295 -- # local -ga net_devs 
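
A gloss on the NIC selection that follows: gather_supported_nvmf_pci_devs buckets PCI devices by vendor:device ID and keeps the bucket named by SPDK_TEST_NVMF_NICS (e810 for this job). A condensed sketch of the logic visible in the xtrace below; pci_bus_cache is an associative array filled earlier in nvmf/common.sh and is taken as a given here:

    intel=0x8086 mellanox=0x15b3
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})   # Intel E810 variants (ice driver)
    x722=(${pci_bus_cache["$intel:0x37d2"]})    # X722, unused in this tcp job
    # mlx buckets are built the same way from $mellanox device IDs
    pci_devs=("${e810[@]}")    # SPDK_TEST_NVMF_NICS=e810, so only the E810 bucket survives

Both ports of the E810 at 0000:af:00.x (device ID 0x159b) are then found and surface as net devices cvl_0_0 and cvl_0_1.
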
00:11:58.316 08:46:15 -- nvmf/common.sh@296 -- # e810=() 00:11:58.316 08:46:15 -- nvmf/common.sh@296 -- # local -ga e810 00:11:58.316 08:46:15 -- nvmf/common.sh@297 -- # x722=() 00:11:58.316 08:46:15 -- nvmf/common.sh@297 -- # local -ga x722 00:11:58.316 08:46:15 -- nvmf/common.sh@298 -- # mlx=() 00:11:58.316 08:46:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:58.316 08:46:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.316 08:46:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.316 08:46:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.316 08:46:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.316 08:46:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.316 08:46:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.316 08:46:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.316 08:46:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.316 08:46:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.316 08:46:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.316 08:46:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.316 08:46:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:58.316 08:46:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:58.316 08:46:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:58.316 08:46:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.316 08:46:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:58.316 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:58.316 08:46:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:58.316 08:46:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:58.316 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:58.316 08:46:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:58.316 08:46:15 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.316 08:46:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.316 08:46:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:58.316 08:46:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.316 08:46:15 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:af:00.0: cvl_0_0' 00:11:58.316 Found net devices under 0000:af:00.0: cvl_0_0 00:11:58.316 08:46:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.316 08:46:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:58.316 08:46:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.316 08:46:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:58.316 08:46:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.316 08:46:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:58.316 Found net devices under 0000:af:00.1: cvl_0_1 00:11:58.316 08:46:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.316 08:46:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:58.316 08:46:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:58.316 08:46:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:58.316 08:46:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:58.316 08:46:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.316 08:46:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.316 08:46:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.316 08:46:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:58.316 08:46:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.316 08:46:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.316 08:46:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:58.316 08:46:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.316 08:46:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.316 08:46:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:58.316 08:46:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:58.316 08:46:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.316 08:46:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.316 08:46:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.316 08:46:15 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.316 08:46:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:58.316 08:46:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:58.575 08:46:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:58.575 08:46:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:58.575 08:46:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:58.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:58.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:11:58.575 00:11:58.575 --- 10.0.0.2 ping statistics --- 00:11:58.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.575 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:11:58.575 08:46:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:58.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:58.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:11:58.575 00:11:58.575 --- 10.0.0.1 ping statistics --- 00:11:58.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.575 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:11:58.575 08:46:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.575 08:46:15 -- nvmf/common.sh@411 -- # return 0 00:11:58.575 08:46:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:58.575 08:46:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.575 08:46:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:58.575 08:46:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:58.575 08:46:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.575 08:46:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:58.575 08:46:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:58.575 08:46:15 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:58.575 08:46:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:58.575 08:46:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:58.575 08:46:15 -- common/autotest_common.sh@10 -- # set +x 00:11:58.575 08:46:15 -- nvmf/common.sh@470 -- # nvmfpid=1958937 00:11:58.575 08:46:15 -- nvmf/common.sh@471 -- # waitforlisten 1958937 00:11:58.575 08:46:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.575 08:46:15 -- common/autotest_common.sh@817 -- # '[' -z 1958937 ']' 00:11:58.575 08:46:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.575 08:46:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:58.575 08:46:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.575 08:46:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:58.575 08:46:15 -- common/autotest_common.sh@10 -- # set +x 00:11:58.575 [2024-04-26 08:46:15.729323] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:11:58.575 [2024-04-26 08:46:15.729372] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.575 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.575 [2024-04-26 08:46:15.806111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.836 [2024-04-26 08:46:15.879320] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.836 [2024-04-26 08:46:15.879358] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.836 [2024-04-26 08:46:15.879367] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.836 [2024-04-26 08:46:15.879376] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.836 [2024-04-26 08:46:15.879399] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
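
With nvmf_tgt starting up inside the namespace, the remainder of this test builds a single subsystem with one malloc namespace and loops connect/disconnect five times (num_iterations=5 below). A minimal sketch of that flow; the bdev size, serial, NQN and 10.0.0.2:4420 listener are this run's values, and the nvme connect/disconnect pair stands in for the loop body, whose xtrace is suppressed by set +x:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512    # 64 MiB bdev with 512 B blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # host NQN/ID flags omitted
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"
    done

The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines below are the only per-iteration output that survives the suppression.
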
00:11:58.836 [2024-04-26 08:46:15.879469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.836 [2024-04-26 08:46:15.879525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.836 [2024-04-26 08:46:15.879618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.836 [2024-04-26 08:46:15.879620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.403 08:46:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:59.403 08:46:16 -- common/autotest_common.sh@850 -- # return 0 00:11:59.403 08:46:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:59.403 08:46:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:59.403 08:46:16 -- common/autotest_common.sh@10 -- # set +x 00:11:59.404 08:46:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.404 08:46:16 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:59.404 08:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.404 08:46:16 -- common/autotest_common.sh@10 -- # set +x 00:11:59.404 [2024-04-26 08:46:16.573233] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.404 08:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.404 08:46:16 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:59.404 08:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.404 08:46:16 -- common/autotest_common.sh@10 -- # set +x 00:11:59.404 08:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.404 08:46:16 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:59.404 08:46:16 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:59.404 08:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.404 08:46:16 -- common/autotest_common.sh@10 -- # set +x 00:11:59.404 08:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.404 08:46:16 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:59.404 08:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.404 08:46:16 -- common/autotest_common.sh@10 -- # set +x 00:11:59.404 08:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.404 08:46:16 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.404 08:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:59.404 08:46:16 -- common/autotest_common.sh@10 -- # set +x 00:11:59.404 [2024-04-26 08:46:16.627742] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.404 08:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:59.404 08:46:16 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:59.404 08:46:16 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:59.404 08:46:16 -- target/connect_disconnect.sh@34 -- # set +x 00:12:03.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.688 08:46:34 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:17.688 08:46:34 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:17.688 08:46:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:17.688 08:46:34 -- nvmf/common.sh@117 -- # sync 00:12:17.688 08:46:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:17.688 08:46:34 -- nvmf/common.sh@120 -- # set +e 00:12:17.688 08:46:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:17.688 08:46:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:17.688 rmmod nvme_tcp 00:12:17.688 rmmod nvme_fabrics 00:12:17.688 rmmod nvme_keyring 00:12:17.688 08:46:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:17.688 08:46:34 -- nvmf/common.sh@124 -- # set -e 00:12:17.688 08:46:34 -- nvmf/common.sh@125 -- # return 0 00:12:17.688 08:46:34 -- nvmf/common.sh@478 -- # '[' -n 1958937 ']' 00:12:17.688 08:46:34 -- nvmf/common.sh@479 -- # killprocess 1958937 00:12:17.688 08:46:34 -- common/autotest_common.sh@936 -- # '[' -z 1958937 ']' 00:12:17.688 08:46:34 -- common/autotest_common.sh@940 -- # kill -0 1958937 00:12:17.688 08:46:34 -- common/autotest_common.sh@941 -- # uname 00:12:17.688 08:46:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:17.688 08:46:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1958937 00:12:17.688 08:46:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:17.688 08:46:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:17.688 08:46:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1958937' 00:12:17.688 killing process with pid 1958937 00:12:17.688 08:46:34 -- common/autotest_common.sh@955 -- # kill 1958937 00:12:17.688 08:46:34 -- common/autotest_common.sh@960 -- # wait 1958937 00:12:17.688 08:46:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:17.688 08:46:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:17.688 08:46:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:17.688 08:46:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:17.688 08:46:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:17.688 08:46:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.688 08:46:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.688 08:46:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.595 08:46:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.595 00:12:19.595 real 0m27.773s 00:12:19.595 user 1m14.859s 00:12:19.595 sys 0m7.196s 00:12:19.595 08:46:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:19.595 08:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:19.595 ************************************ 00:12:19.595 END TEST nvmf_connect_disconnect 00:12:19.595 ************************************ 00:12:19.595 08:46:36 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:19.595 08:46:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:19.595 08:46:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.595 08:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:19.595 ************************************ 00:12:19.595 START TEST nvmf_multitarget 00:12:19.595 ************************************ 00:12:19.595 08:46:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh 
--transport=tcp 00:12:19.595 * Looking for test storage... 00:12:19.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.595 08:46:36 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.595 08:46:36 -- nvmf/common.sh@7 -- # uname -s 00:12:19.595 08:46:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.595 08:46:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.595 08:46:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.595 08:46:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.595 08:46:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.595 08:46:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.595 08:46:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.595 08:46:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.595 08:46:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.595 08:46:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.855 08:46:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:19.855 08:46:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:19.855 08:46:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.855 08:46:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.855 08:46:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.855 08:46:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.855 08:46:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.855 08:46:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.855 08:46:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.855 08:46:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.855 08:46:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.855 08:46:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.855 08:46:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.855 08:46:36 -- paths/export.sh@5 -- # export PATH 00:12:19.855 08:46:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.855 08:46:36 -- nvmf/common.sh@47 -- # : 0 00:12:19.855 08:46:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.855 08:46:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.855 08:46:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.855 08:46:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.855 08:46:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.855 08:46:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.855 08:46:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.855 08:46:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.855 08:46:36 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:19.855 08:46:36 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:19.855 08:46:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:19.855 08:46:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.855 08:46:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:19.855 08:46:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:19.855 08:46:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:19.855 08:46:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.855 08:46:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.855 08:46:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.855 08:46:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:19.855 08:46:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:19.855 08:46:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.855 08:46:36 -- common/autotest_common.sh@10 -- # set +x 00:12:26.496 08:46:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:26.496 08:46:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.496 08:46:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.496 08:46:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.496 08:46:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.496 08:46:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.496 08:46:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.496 08:46:43 -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.496 08:46:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.496 08:46:43 -- 
nvmf/common.sh@296 -- # e810=() 00:12:26.496 08:46:43 -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.496 08:46:43 -- nvmf/common.sh@297 -- # x722=() 00:12:26.496 08:46:43 -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.496 08:46:43 -- nvmf/common.sh@298 -- # mlx=() 00:12:26.496 08:46:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.496 08:46:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.496 08:46:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.496 08:46:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.496 08:46:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.496 08:46:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.496 08:46:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.496 08:46:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.496 08:46:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.496 08:46:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.496 08:46:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.496 08:46:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.496 08:46:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.496 08:46:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.496 08:46:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.496 08:46:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.496 08:46:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:26.496 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:26.496 08:46:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.496 08:46:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:26.496 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:26.496 08:46:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.496 08:46:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.496 08:46:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.496 08:46:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:26.496 08:46:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.496 08:46:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:12:26.496 Found net devices under 0000:af:00.0: cvl_0_0 00:12:26.496 08:46:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.496 08:46:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.496 08:46:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.496 08:46:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:26.496 08:46:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.496 08:46:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:26.496 Found net devices under 0000:af:00.1: cvl_0_1 00:12:26.496 08:46:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.496 08:46:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:26.496 08:46:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:26.496 08:46:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:26.496 08:46:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:26.497 08:46:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:26.497 08:46:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.497 08:46:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.497 08:46:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.497 08:46:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.497 08:46:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.497 08:46:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.497 08:46:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.497 08:46:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.497 08:46:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.497 08:46:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.497 08:46:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.497 08:46:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.497 08:46:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.766 08:46:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.766 08:46:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.766 08:46:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.766 08:46:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.766 08:46:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.766 08:46:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.766 08:46:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:12:26.766 00:12:26.766 --- 10.0.0.2 ping statistics --- 00:12:26.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.766 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:12:26.766 08:46:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:12:26.766 00:12:26.766 --- 10.0.0.1 ping statistics --- 00:12:26.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.766 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:12:26.766 08:46:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.766 08:46:43 -- nvmf/common.sh@411 -- # return 0 00:12:26.766 08:46:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:26.766 08:46:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.766 08:46:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:26.766 08:46:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:26.766 08:46:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.766 08:46:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:26.766 08:46:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:26.766 08:46:43 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:26.766 08:46:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:26.766 08:46:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:26.766 08:46:43 -- common/autotest_common.sh@10 -- # set +x 00:12:26.766 08:46:43 -- nvmf/common.sh@470 -- # nvmfpid=1965950 00:12:26.766 08:46:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.766 08:46:43 -- nvmf/common.sh@471 -- # waitforlisten 1965950 00:12:26.766 08:46:43 -- common/autotest_common.sh@817 -- # '[' -z 1965950 ']' 00:12:26.766 08:46:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.766 08:46:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:26.766 08:46:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.766 08:46:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:26.766 08:46:43 -- common/autotest_common.sh@10 -- # set +x 00:12:27.025 [2024-04-26 08:46:44.024018] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:12:27.025 [2024-04-26 08:46:44.024064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.025 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.025 [2024-04-26 08:46:44.096824] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.025 [2024-04-26 08:46:44.167811] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.025 [2024-04-26 08:46:44.167847] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.025 [2024-04-26 08:46:44.167857] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.025 [2024-04-26 08:46:44.167865] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.025 [2024-04-26 08:46:44.167888] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
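
Both this test and the previous ones perform the same nvmf_tcp_init plumbing: one port of the E810 pair moves into a network namespace for the target, while the other stays in the root namespace as the initiator. Condensed from the xtrace above (cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are this run's values):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

nvmf_tgt itself is launched through ip netns exec cvl_0_0_ns_spdk, which is why it listens at 10.0.0.2 while nvme-cli on the host reaches it over cvl_0_1.
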
00:12:27.025 [2024-04-26 08:46:44.167928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.025 [2024-04-26 08:46:44.168020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.025 [2024-04-26 08:46:44.168102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.025 [2024-04-26 08:46:44.168103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.593 08:46:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:27.593 08:46:44 -- common/autotest_common.sh@850 -- # return 0 00:12:27.593 08:46:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:27.593 08:46:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:27.593 08:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:27.851 08:46:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.851 08:46:44 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:27.851 08:46:44 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:27.851 08:46:44 -- target/multitarget.sh@21 -- # jq length 00:12:27.851 08:46:44 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:27.851 08:46:44 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:27.851 "nvmf_tgt_1" 00:12:27.851 08:46:45 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:28.110 "nvmf_tgt_2" 00:12:28.110 08:46:45 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:28.110 08:46:45 -- target/multitarget.sh@28 -- # jq length 00:12:28.110 08:46:45 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:28.110 08:46:45 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:28.370 true 00:12:28.370 08:46:45 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:28.370 true 00:12:28.370 08:46:45 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:28.370 08:46:45 -- target/multitarget.sh@35 -- # jq length 00:12:28.370 08:46:45 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:28.370 08:46:45 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:28.370 08:46:45 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:28.370 08:46:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:28.370 08:46:45 -- nvmf/common.sh@117 -- # sync 00:12:28.630 08:46:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:28.630 08:46:45 -- nvmf/common.sh@120 -- # set +e 00:12:28.630 08:46:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:28.630 08:46:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:28.630 rmmod nvme_tcp 00:12:28.630 rmmod nvme_fabrics 00:12:28.630 rmmod nvme_keyring 00:12:28.630 08:46:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:28.630 08:46:45 -- nvmf/common.sh@124 -- # set -e 00:12:28.630 08:46:45 -- nvmf/common.sh@125 -- # return 0 
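
For reference, the multitarget checks above drive SPDK's multi-target RPCs through the test helper multitarget_rpc.py; a condensed sketch with this run's target names (the helper path is the one shown in the xtrace, and -s maps to the helper's max-subsystems option):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc nvmf_get_targets | jq length     # 1: only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    $rpc nvmf_get_targets | jq length     # 3
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    $rpc nvmf_get_targets | jq length     # back to 1

nvmf_delete_target prints true on success, matching the two bare true lines in the output above.
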
00:12:28.630 08:46:45 -- nvmf/common.sh@478 -- # '[' -n 1965950 ']' 00:12:28.630 08:46:45 -- nvmf/common.sh@479 -- # killprocess 1965950 00:12:28.630 08:46:45 -- common/autotest_common.sh@936 -- # '[' -z 1965950 ']' 00:12:28.630 08:46:45 -- common/autotest_common.sh@940 -- # kill -0 1965950 00:12:28.630 08:46:45 -- common/autotest_common.sh@941 -- # uname 00:12:28.630 08:46:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:28.630 08:46:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1965950 00:12:28.630 08:46:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:28.630 08:46:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:28.630 08:46:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1965950' 00:12:28.630 killing process with pid 1965950 00:12:28.630 08:46:45 -- common/autotest_common.sh@955 -- # kill 1965950 00:12:28.630 08:46:45 -- common/autotest_common.sh@960 -- # wait 1965950 00:12:28.890 08:46:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:28.890 08:46:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:28.890 08:46:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:28.890 08:46:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:28.890 08:46:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:28.890 08:46:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.890 08:46:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.890 08:46:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.798 08:46:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:30.798 00:12:30.798 real 0m11.304s 00:12:30.798 user 0m9.668s 00:12:30.798 sys 0m5.948s 00:12:30.798 08:46:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:30.798 08:46:48 -- common/autotest_common.sh@10 -- # set +x 00:12:30.798 ************************************ 00:12:30.798 END TEST nvmf_multitarget 00:12:30.798 ************************************ 00:12:31.056 08:46:48 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:31.056 08:46:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:31.056 08:46:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.056 08:46:48 -- common/autotest_common.sh@10 -- # set +x 00:12:31.056 ************************************ 00:12:31.056 START TEST nvmf_rpc 00:12:31.056 ************************************ 00:12:31.056 08:46:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:31.313 * Looking for test storage... 
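[annotation] The host identity threaded through every nvme connect in the rpc test comes from nvme-cli, as the common.sh trace below shows. Roughly (the exact string surgery for NVME_HOSTID is a sketch, not the literal common.sh code):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID, reused as --hostid (assumed derivation)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")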
00:12:31.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.314 08:46:48 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.314 08:46:48 -- nvmf/common.sh@7 -- # uname -s 00:12:31.314 08:46:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.314 08:46:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.314 08:46:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.314 08:46:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.314 08:46:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.314 08:46:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.314 08:46:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.314 08:46:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.314 08:46:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.314 08:46:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.314 08:46:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:31.314 08:46:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:31.314 08:46:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.314 08:46:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.314 08:46:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.314 08:46:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.314 08:46:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.314 08:46:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.314 08:46:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.314 08:46:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.314 08:46:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.314 08:46:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.314 08:46:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.314 08:46:48 -- paths/export.sh@5 -- # export PATH 00:12:31.314 08:46:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.314 08:46:48 -- nvmf/common.sh@47 -- # : 0 00:12:31.314 08:46:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.314 08:46:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.314 08:46:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.314 08:46:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.314 08:46:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.314 08:46:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.314 08:46:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.314 08:46:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.314 08:46:48 -- target/rpc.sh@11 -- # loops=5 00:12:31.314 08:46:48 -- target/rpc.sh@23 -- # nvmftestinit 00:12:31.314 08:46:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:31.314 08:46:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.314 08:46:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:31.314 08:46:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:31.314 08:46:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:31.314 08:46:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.314 08:46:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:31.314 08:46:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.314 08:46:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:31.314 08:46:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:31.314 08:46:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:31.314 08:46:48 -- common/autotest_common.sh@10 -- # set +x 00:12:37.868 08:46:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:37.868 08:46:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:37.868 08:46:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:37.869 08:46:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:37.869 08:46:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:37.869 08:46:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:37.869 08:46:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:37.869 08:46:54 -- nvmf/common.sh@295 -- # net_devs=() 00:12:37.869 08:46:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:37.869 08:46:54 -- nvmf/common.sh@296 -- # e810=() 00:12:37.869 08:46:54 -- nvmf/common.sh@296 -- # local -ga e810 00:12:37.869 
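[annotation] The device-detection loop that follows resolves each whitelisted PCI function to its kernel netdev through sysfs; in essence (PCI addresses as found on this node):

    for pci in 0000:af:00.0 0000:af:00.1; do
        # Glob the netdev directory that the kernel exposes per PCI function.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done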
08:46:54 -- nvmf/common.sh@297 -- # x722=() 00:12:37.869 08:46:54 -- nvmf/common.sh@297 -- # local -ga x722 00:12:37.869 08:46:54 -- nvmf/common.sh@298 -- # mlx=() 00:12:37.869 08:46:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:37.869 08:46:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.869 08:46:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.869 08:46:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.869 08:46:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.869 08:46:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.869 08:46:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.869 08:46:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.869 08:46:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.869 08:46:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.869 08:46:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.869 08:46:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.869 08:46:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:37.869 08:46:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:37.869 08:46:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:37.869 08:46:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.869 08:46:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:37.869 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:37.869 08:46:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.869 08:46:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:37.869 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:37.869 08:46:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:37.869 08:46:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.869 08:46:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.869 08:46:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:37.869 08:46:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.869 08:46:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:37.869 Found net devices under 0000:af:00.0: cvl_0_0 00:12:37.869 08:46:54 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:37.869 08:46:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.869 08:46:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.869 08:46:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:37.869 08:46:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.869 08:46:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:37.869 Found net devices under 0000:af:00.1: cvl_0_1 00:12:37.869 08:46:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.869 08:46:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:37.869 08:46:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:37.869 08:46:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:37.869 08:46:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:37.869 08:46:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.869 08:46:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.869 08:46:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.869 08:46:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:37.869 08:46:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.869 08:46:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.869 08:46:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:37.869 08:46:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.869 08:46:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.869 08:46:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:37.869 08:46:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:37.869 08:46:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.869 08:46:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.127 08:46:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.127 08:46:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.127 08:46:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.127 08:46:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.127 08:46:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.127 08:46:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.127 08:46:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:12:38.127 00:12:38.127 --- 10.0.0.2 ping statistics --- 00:12:38.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.127 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:12:38.127 08:46:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:38.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:12:38.127 00:12:38.127 --- 10.0.0.1 ping statistics --- 00:12:38.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.127 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:12:38.127 08:46:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.127 08:46:55 -- nvmf/common.sh@411 -- # return 0 00:12:38.127 08:46:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:38.127 08:46:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.127 08:46:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:38.127 08:46:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:38.127 08:46:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.127 08:46:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:38.127 08:46:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:38.127 08:46:55 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:38.127 08:46:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:38.127 08:46:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:38.127 08:46:55 -- common/autotest_common.sh@10 -- # set +x 00:12:38.127 08:46:55 -- nvmf/common.sh@470 -- # nvmfpid=1969963 00:12:38.127 08:46:55 -- nvmf/common.sh@471 -- # waitforlisten 1969963 00:12:38.127 08:46:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.127 08:46:55 -- common/autotest_common.sh@817 -- # '[' -z 1969963 ']' 00:12:38.127 08:46:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.127 08:46:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:38.127 08:46:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.127 08:46:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:38.127 08:46:55 -- common/autotest_common.sh@10 -- # set +x 00:12:38.387 [2024-04-26 08:46:55.413254] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:12:38.387 [2024-04-26 08:46:55.413303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.387 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.387 [2024-04-26 08:46:55.489300] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.387 [2024-04-26 08:46:55.561834] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.387 [2024-04-26 08:46:55.561870] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.387 [2024-04-26 08:46:55.561880] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.387 [2024-04-26 08:46:55.561889] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.387 [2024-04-26 08:46:55.561896] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
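[annotation] The connectivity just verified by the two pings rests on the namespace plumbing traced in nvmf_tcp_init above; condensed:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side NIC moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT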
00:12:38.387 [2024-04-26 08:46:55.561948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.387 [2024-04-26 08:46:55.561964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.387 [2024-04-26 08:46:55.562052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.387 [2024-04-26 08:46:55.562053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.325 08:46:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:39.325 08:46:56 -- common/autotest_common.sh@850 -- # return 0 00:12:39.325 08:46:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:39.325 08:46:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:39.325 08:46:56 -- common/autotest_common.sh@10 -- # set +x 00:12:39.325 08:46:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.325 08:46:56 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:39.325 08:46:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.325 08:46:56 -- common/autotest_common.sh@10 -- # set +x 00:12:39.325 08:46:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.325 08:46:56 -- target/rpc.sh@26 -- # stats='{ 00:12:39.325 "tick_rate": 2500000000, 00:12:39.325 "poll_groups": [ 00:12:39.325 { 00:12:39.325 "name": "nvmf_tgt_poll_group_0", 00:12:39.325 "admin_qpairs": 0, 00:12:39.325 "io_qpairs": 0, 00:12:39.325 "current_admin_qpairs": 0, 00:12:39.325 "current_io_qpairs": 0, 00:12:39.325 "pending_bdev_io": 0, 00:12:39.325 "completed_nvme_io": 0, 00:12:39.325 "transports": [] 00:12:39.325 }, 00:12:39.325 { 00:12:39.325 "name": "nvmf_tgt_poll_group_1", 00:12:39.325 "admin_qpairs": 0, 00:12:39.325 "io_qpairs": 0, 00:12:39.325 "current_admin_qpairs": 0, 00:12:39.325 "current_io_qpairs": 0, 00:12:39.325 "pending_bdev_io": 0, 00:12:39.326 "completed_nvme_io": 0, 00:12:39.326 "transports": [] 00:12:39.326 }, 00:12:39.326 { 00:12:39.326 "name": "nvmf_tgt_poll_group_2", 00:12:39.326 "admin_qpairs": 0, 00:12:39.326 "io_qpairs": 0, 00:12:39.326 "current_admin_qpairs": 0, 00:12:39.326 "current_io_qpairs": 0, 00:12:39.326 "pending_bdev_io": 0, 00:12:39.326 "completed_nvme_io": 0, 00:12:39.326 "transports": [] 00:12:39.326 }, 00:12:39.326 { 00:12:39.326 "name": "nvmf_tgt_poll_group_3", 00:12:39.326 "admin_qpairs": 0, 00:12:39.326 "io_qpairs": 0, 00:12:39.326 "current_admin_qpairs": 0, 00:12:39.326 "current_io_qpairs": 0, 00:12:39.326 "pending_bdev_io": 0, 00:12:39.326 "completed_nvme_io": 0, 00:12:39.326 "transports": [] 00:12:39.326 } 00:12:39.326 ] 00:12:39.326 }' 00:12:39.326 08:46:56 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:39.326 08:46:56 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:39.326 08:46:56 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:39.326 08:46:56 -- target/rpc.sh@15 -- # wc -l 00:12:39.326 08:46:56 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:39.326 08:46:56 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:39.326 08:46:56 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:39.326 08:46:56 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.326 08:46:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.326 08:46:56 -- common/autotest_common.sh@10 -- # set +x 00:12:39.326 [2024-04-26 08:46:56.384610] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.326 08:46:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.326 08:46:56 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:39.326 08:46:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.326 08:46:56 -- common/autotest_common.sh@10 -- # set +x 00:12:39.326 08:46:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.326 08:46:56 -- target/rpc.sh@33 -- # stats='{ 00:12:39.326 "tick_rate": 2500000000, 00:12:39.326 "poll_groups": [ 00:12:39.326 { 00:12:39.326 "name": "nvmf_tgt_poll_group_0", 00:12:39.326 "admin_qpairs": 0, 00:12:39.326 "io_qpairs": 0, 00:12:39.326 "current_admin_qpairs": 0, 00:12:39.326 "current_io_qpairs": 0, 00:12:39.326 "pending_bdev_io": 0, 00:12:39.326 "completed_nvme_io": 0, 00:12:39.326 "transports": [ 00:12:39.326 { 00:12:39.326 "trtype": "TCP" 00:12:39.326 } 00:12:39.326 ] 00:12:39.326 }, 00:12:39.326 { 00:12:39.326 "name": "nvmf_tgt_poll_group_1", 00:12:39.326 "admin_qpairs": 0, 00:12:39.326 "io_qpairs": 0, 00:12:39.326 "current_admin_qpairs": 0, 00:12:39.326 "current_io_qpairs": 0, 00:12:39.326 "pending_bdev_io": 0, 00:12:39.326 "completed_nvme_io": 0, 00:12:39.326 "transports": [ 00:12:39.326 { 00:12:39.326 "trtype": "TCP" 00:12:39.326 } 00:12:39.326 ] 00:12:39.326 }, 00:12:39.326 { 00:12:39.326 "name": "nvmf_tgt_poll_group_2", 00:12:39.326 "admin_qpairs": 0, 00:12:39.326 "io_qpairs": 0, 00:12:39.326 "current_admin_qpairs": 0, 00:12:39.326 "current_io_qpairs": 0, 00:12:39.326 "pending_bdev_io": 0, 00:12:39.326 "completed_nvme_io": 0, 00:12:39.326 "transports": [ 00:12:39.326 { 00:12:39.326 "trtype": "TCP" 00:12:39.326 } 00:12:39.326 ] 00:12:39.326 }, 00:12:39.326 { 00:12:39.326 "name": "nvmf_tgt_poll_group_3", 00:12:39.326 "admin_qpairs": 0, 00:12:39.326 "io_qpairs": 0, 00:12:39.326 "current_admin_qpairs": 0, 00:12:39.326 "current_io_qpairs": 0, 00:12:39.326 "pending_bdev_io": 0, 00:12:39.326 "completed_nvme_io": 0, 00:12:39.326 "transports": [ 00:12:39.326 { 00:12:39.326 "trtype": "TCP" 00:12:39.326 } 00:12:39.326 ] 00:12:39.326 } 00:12:39.326 ] 00:12:39.326 }' 00:12:39.326 08:46:56 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:39.326 08:46:56 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:39.326 08:46:56 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:39.326 08:46:56 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:39.326 08:46:56 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:39.326 08:46:56 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:39.326 08:46:56 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:39.326 08:46:56 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:39.326 08:46:56 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:39.326 08:46:56 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:39.326 08:46:56 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:39.326 08:46:56 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:39.326 08:46:56 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:39.326 08:46:56 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:39.326 08:46:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.326 08:46:56 -- common/autotest_common.sh@10 -- # set +x 00:12:39.326 Malloc1 00:12:39.326 08:46:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.326 08:46:56 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:39.326 08:46:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.326 08:46:56 -- common/autotest_common.sh@10 -- # set +x 00:12:39.326 
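[annotation] rpc_cmd in these traces forwards to SPDK's scripts/rpc.py against /var/tmp/spdk.sock, so the transport/bdev/subsystem setup above done by hand would look like this (a sketch of equivalent calls, not the test harness itself):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    # jsum, as used by target/rpc.sh: sum one stats field across all poll groups.
    scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'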
08:46:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.326 08:46:56 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:39.326 08:46:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.326 08:46:56 -- common/autotest_common.sh@10 -- # set +x 00:12:39.326 08:46:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.326 08:46:56 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:39.326 08:46:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.326 08:46:56 -- common/autotest_common.sh@10 -- # set +x 00:12:39.326 08:46:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.326 08:46:56 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.326 08:46:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.326 08:46:56 -- common/autotest_common.sh@10 -- # set +x 00:12:39.326 [2024-04-26 08:46:56.567660] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.326 08:46:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.588 08:46:56 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:12:39.588 08:46:56 -- common/autotest_common.sh@638 -- # local es=0 00:12:39.588 08:46:56 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:12:39.588 08:46:56 -- common/autotest_common.sh@626 -- # local arg=nvme 00:12:39.588 08:46:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:39.588 08:46:56 -- common/autotest_common.sh@630 -- # type -t nvme 00:12:39.588 08:46:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:39.588 08:46:56 -- common/autotest_common.sh@632 -- # type -P nvme 00:12:39.588 08:46:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:39.588 08:46:56 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:12:39.588 08:46:56 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:12:39.588 08:46:56 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:12:39.588 [2024-04-26 08:46:56.596465] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:12:39.588 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:39.588 could not add new controller: failed to write to nvme-fabrics device 00:12:39.588 08:46:56 -- common/autotest_common.sh@641 -- # es=1 00:12:39.588 08:46:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:39.588 08:46:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:39.588 08:46:56 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:12:39.588 08:46:56 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:39.588 08:46:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.588 08:46:56 -- common/autotest_common.sh@10 -- # set +x 00:12:39.588 08:46:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.588 08:46:56 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.989 08:46:57 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.989 08:46:57 -- common/autotest_common.sh@1184 -- # local i=0 00:12:40.989 08:46:57 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.989 08:46:57 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:40.989 08:46:57 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:42.893 08:46:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:42.893 08:46:59 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:42.893 08:46:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.893 08:46:59 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:42.893 08:46:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.893 08:46:59 -- common/autotest_common.sh@1194 -- # return 0 00:12:42.893 08:46:59 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.893 08:47:00 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.893 08:47:00 -- common/autotest_common.sh@1205 -- # local i=0 00:12:42.893 08:47:00 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:42.893 08:47:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.893 08:47:00 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:42.893 08:47:00 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.893 08:47:00 -- common/autotest_common.sh@1217 -- # return 0 00:12:42.893 08:47:00 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:42.893 08:47:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:42.893 08:47:00 -- common/autotest_common.sh@10 -- # set +x 00:12:42.893 08:47:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:42.893 08:47:00 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.893 08:47:00 -- common/autotest_common.sh@638 -- # local es=0 00:12:42.893 08:47:00 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.893 08:47:00 -- common/autotest_common.sh@626 -- # local arg=nvme 00:12:42.893 08:47:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:42.893 08:47:00 -- common/autotest_common.sh@630 -- # type -t nvme 00:12:42.893 08:47:00 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:42.893 08:47:00 -- common/autotest_common.sh@632 -- # type -P nvme 00:12:42.893 08:47:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:42.893 08:47:00 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:12:42.893 08:47:00 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:12:42.893 08:47:00 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:43.152 [2024-04-26 08:47:00.141920] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:12:43.152 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:43.152 could not add new controller: failed to write to nvme-fabrics device 00:12:43.152 08:47:00 -- common/autotest_common.sh@641 -- # es=1 00:12:43.152 08:47:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:43.152 08:47:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:43.152 08:47:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:43.152 08:47:00 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:43.152 08:47:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:43.152 08:47:00 -- common/autotest_common.sh@10 -- # set +x 00:12:43.152 08:47:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:43.152 08:47:00 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.532 08:47:01 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.532 08:47:01 -- common/autotest_common.sh@1184 -- # local i=0 00:12:44.532 08:47:01 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.532 08:47:01 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:44.532 08:47:01 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:46.438 08:47:03 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:46.438 08:47:03 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:46.438 08:47:03 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.438 08:47:03 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:46.438 08:47:03 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.438 08:47:03 -- common/autotest_common.sh@1194 -- # return 0 00:12:46.438 08:47:03 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.697 08:47:03 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.697 08:47:03 -- common/autotest_common.sh@1205 -- # local i=0 00:12:46.697 08:47:03 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:46.697 08:47:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.697 08:47:03 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:46.697 08:47:03 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.697 08:47:03 -- common/autotest_common.sh@1217 -- # return 0 00:12:46.697 08:47:03 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.697 08:47:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:46.697 08:47:03 -- common/autotest_common.sh@10 -- # set +x 00:12:46.697 08:47:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:46.697 08:47:03 -- target/rpc.sh@81 -- # seq 1 5 00:12:46.697 08:47:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.697 08:47:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.697 08:47:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:46.697 08:47:03 -- common/autotest_common.sh@10 -- # set +x 00:12:46.697 08:47:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:46.697 08:47:03 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.697 08:47:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:46.697 08:47:03 -- common/autotest_common.sh@10 -- # set +x 00:12:46.697 [2024-04-26 08:47:03.827706] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.697 08:47:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:46.697 08:47:03 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.697 08:47:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:46.697 08:47:03 -- common/autotest_common.sh@10 -- # set +x 00:12:46.697 08:47:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:46.697 08:47:03 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.697 08:47:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:46.697 08:47:03 -- common/autotest_common.sh@10 -- # set +x 00:12:46.697 08:47:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:46.697 08:47:03 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.078 08:47:05 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.078 08:47:05 -- common/autotest_common.sh@1184 -- # local i=0 00:12:48.078 08:47:05 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.078 08:47:05 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:48.078 08:47:05 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:49.984 08:47:07 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:49.984 08:47:07 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:49.984 08:47:07 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.984 08:47:07 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:49.984 08:47:07 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.984 08:47:07 -- common/autotest_common.sh@1194 -- # return 0 00:12:49.984 08:47:07 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.243 08:47:07 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.243 08:47:07 -- common/autotest_common.sh@1205 -- # local i=0 00:12:50.243 08:47:07 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:50.243 08:47:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
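[annotation] The access-control sequence exercised just above, in plain commands (NVME_HOST as defined by common.sh; the NOT wrapper in the trace asserts the expected failures):

    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # rejected: host not on the subsystem allow list
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # accepted; disconnect, then:
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
        # connects are rejected again, until:
    scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1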
00:12:50.243 08:47:07 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:50.243 08:47:07 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.243 08:47:07 -- common/autotest_common.sh@1217 -- # return 0 00:12:50.243 08:47:07 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:50.243 08:47:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.243 08:47:07 -- common/autotest_common.sh@10 -- # set +x 00:12:50.243 08:47:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.243 08:47:07 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.243 08:47:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.243 08:47:07 -- common/autotest_common.sh@10 -- # set +x 00:12:50.243 08:47:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.243 08:47:07 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:50.243 08:47:07 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:50.243 08:47:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.243 08:47:07 -- common/autotest_common.sh@10 -- # set +x 00:12:50.243 08:47:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.243 08:47:07 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.243 08:47:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.243 08:47:07 -- common/autotest_common.sh@10 -- # set +x 00:12:50.243 [2024-04-26 08:47:07.340017] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.243 08:47:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.243 08:47:07 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:50.243 08:47:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.243 08:47:07 -- common/autotest_common.sh@10 -- # set +x 00:12:50.243 08:47:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.243 08:47:07 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:50.243 08:47:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.243 08:47:07 -- common/autotest_common.sh@10 -- # set +x 00:12:50.243 08:47:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.243 08:47:07 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.622 08:47:08 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.622 08:47:08 -- common/autotest_common.sh@1184 -- # local i=0 00:12:51.622 08:47:08 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.622 08:47:08 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:51.622 08:47:08 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:53.526 08:47:10 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:53.526 08:47:10 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:53.526 08:47:10 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:53.526 08:47:10 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:53.526 08:47:10 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:53.526 08:47:10 -- 
common/autotest_common.sh@1194 -- # return 0 00:12:53.526 08:47:10 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:53.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.785 08:47:10 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:53.785 08:47:10 -- common/autotest_common.sh@1205 -- # local i=0 00:12:53.785 08:47:10 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:53.785 08:47:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.785 08:47:10 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.785 08:47:10 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:53.785 08:47:10 -- common/autotest_common.sh@1217 -- # return 0 00:12:53.785 08:47:10 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:53.785 08:47:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.785 08:47:10 -- common/autotest_common.sh@10 -- # set +x 00:12:53.785 08:47:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.785 08:47:10 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.785 08:47:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.785 08:47:10 -- common/autotest_common.sh@10 -- # set +x 00:12:53.785 08:47:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.785 08:47:10 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:53.785 08:47:10 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:53.785 08:47:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.785 08:47:10 -- common/autotest_common.sh@10 -- # set +x 00:12:53.785 08:47:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.785 08:47:10 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.785 08:47:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.785 08:47:10 -- common/autotest_common.sh@10 -- # set +x 00:12:53.785 [2024-04-26 08:47:10.858306] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.785 08:47:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.785 08:47:10 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:53.785 08:47:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.785 08:47:10 -- common/autotest_common.sh@10 -- # set +x 00:12:53.785 08:47:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.786 08:47:10 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:53.786 08:47:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:53.786 08:47:10 -- common/autotest_common.sh@10 -- # set +x 00:12:53.786 08:47:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:53.786 08:47:10 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.175 08:47:12 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.175 08:47:12 -- common/autotest_common.sh@1184 -- # local i=0 00:12:55.175 08:47:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.175 08:47:12 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:12:55.175 08:47:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:57.080 08:47:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:57.080 08:47:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.080 08:47:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:57.080 08:47:14 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:57.080 08:47:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.080 08:47:14 -- common/autotest_common.sh@1194 -- # return 0 00:12:57.080 08:47:14 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.339 08:47:14 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.339 08:47:14 -- common/autotest_common.sh@1205 -- # local i=0 00:12:57.339 08:47:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:57.339 08:47:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.339 08:47:14 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:57.339 08:47:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.339 08:47:14 -- common/autotest_common.sh@1217 -- # return 0 00:12:57.339 08:47:14 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:57.339 08:47:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.339 08:47:14 -- common/autotest_common.sh@10 -- # set +x 00:12:57.339 08:47:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.339 08:47:14 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.339 08:47:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.339 08:47:14 -- common/autotest_common.sh@10 -- # set +x 00:12:57.339 08:47:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.339 08:47:14 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:57.339 08:47:14 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.339 08:47:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.339 08:47:14 -- common/autotest_common.sh@10 -- # set +x 00:12:57.339 08:47:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.339 08:47:14 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.339 08:47:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.339 08:47:14 -- common/autotest_common.sh@10 -- # set +x 00:12:57.339 [2024-04-26 08:47:14.409843] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.339 08:47:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.339 08:47:14 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:57.339 08:47:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.339 08:47:14 -- common/autotest_common.sh@10 -- # set +x 00:12:57.340 08:47:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.340 08:47:14 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.340 08:47:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.340 08:47:14 -- common/autotest_common.sh@10 -- # set +x 00:12:57.340 08:47:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.340 
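[annotation] The remaining iterations repeat the same body; the whole 5x connect loop driven by target/rpc.sh is, in full:

    loops=5
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done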
08:47:14 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.720 08:47:15 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.720 08:47:15 -- common/autotest_common.sh@1184 -- # local i=0 00:12:58.720 08:47:15 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.720 08:47:15 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:58.720 08:47:15 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:00.628 08:47:17 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:00.628 08:47:17 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:00.628 08:47:17 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.628 08:47:17 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:00.628 08:47:17 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.628 08:47:17 -- common/autotest_common.sh@1194 -- # return 0 00:13:00.628 08:47:17 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.888 08:47:17 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.888 08:47:17 -- common/autotest_common.sh@1205 -- # local i=0 00:13:00.888 08:47:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:00.888 08:47:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.888 08:47:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:00.888 08:47:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.888 08:47:17 -- common/autotest_common.sh@1217 -- # return 0 00:13:00.888 08:47:17 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:00.888 08:47:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.888 08:47:17 -- common/autotest_common.sh@10 -- # set +x 00:13:00.888 08:47:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.888 08:47:17 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.888 08:47:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.888 08:47:17 -- common/autotest_common.sh@10 -- # set +x 00:13:00.888 08:47:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.888 08:47:18 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:00.888 08:47:18 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.888 08:47:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.888 08:47:18 -- common/autotest_common.sh@10 -- # set +x 00:13:00.888 08:47:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.888 08:47:18 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.888 08:47:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.888 08:47:18 -- common/autotest_common.sh@10 -- # set +x 00:13:00.888 [2024-04-26 08:47:18.018418] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.888 08:47:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.888 08:47:18 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:00.888 
08:47:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.888 08:47:18 -- common/autotest_common.sh@10 -- # set +x 00:13:00.888 08:47:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.888 08:47:18 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.888 08:47:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:00.888 08:47:18 -- common/autotest_common.sh@10 -- # set +x 00:13:00.888 08:47:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:00.888 08:47:18 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.268 08:47:19 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.268 08:47:19 -- common/autotest_common.sh@1184 -- # local i=0 00:13:02.268 08:47:19 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.268 08:47:19 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:02.268 08:47:19 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:04.177 08:47:21 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:04.177 08:47:21 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:04.177 08:47:21 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.177 08:47:21 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:04.177 08:47:21 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.177 08:47:21 -- common/autotest_common.sh@1194 -- # return 0 00:13:04.177 08:47:21 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.437 08:47:21 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.437 08:47:21 -- common/autotest_common.sh@1205 -- # local i=0 00:13:04.437 08:47:21 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:04.437 08:47:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.437 08:47:21 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:04.437 08:47:21 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.437 08:47:21 -- common/autotest_common.sh@1217 -- # return 0 00:13:04.437 08:47:21 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@99 -- # seq 1 5 00:13:04.437 08:47:21 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.437 08:47:21 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 [2024-04-26 08:47:21.523683] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.437 08:47:21 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 [2024-04-26 08:47:21.571772] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.437 08:47:21 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 [2024-04-26 08:47:21.619918] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.437 08:47:21 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 08:47:21 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.437 [2024-04-26 08:47:21.672089] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.437 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.437 
08:47:21 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.437 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.437 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.759 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.759 08:47:21 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.759 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.759 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.759 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.759 08:47:21 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.759 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.759 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.759 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.759 08:47:21 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.759 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.759 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.759 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.759 08:47:21 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:04.759 08:47:21 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.759 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.759 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.759 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.759 08:47:21 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.759 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.759 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.759 [2024-04-26 08:47:21.724291] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.759 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.759 08:47:21 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.759 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.759 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.759 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.759 08:47:21 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.759 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.759 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.759 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.759 08:47:21 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.759 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.759 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.759 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.759 08:47:21 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.759 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.759 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.759 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.759 08:47:21 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
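
Each of the five loop iterations above issues the same RPC sequence against the target. A minimal standalone sketch of that lifecycle, with the rpc.py path abbreviated (the serial, NQN, listener address, and Malloc1 bdev name are taken from the log):

    rpc=scripts/rpc.py                 # abbreviated; the log uses the full workspace path
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns $nqn Malloc1   # bdev attaches as nsid 1
        $rpc nvmf_subsystem_allow_any_host $nqn
        $rpc nvmf_subsystem_remove_ns $nqn 1      # detach nsid 1 before delete
        $rpc nvmf_delete_subsystem $nqn
    done
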
00:13:04.759 08:47:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:04.759 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:04.759 08:47:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:04.759 08:47:21 -- target/rpc.sh@110 -- # stats='{ 00:13:04.759 "tick_rate": 2500000000, 00:13:04.759 "poll_groups": [ 00:13:04.759 { 00:13:04.759 "name": "nvmf_tgt_poll_group_0", 00:13:04.759 "admin_qpairs": 2, 00:13:04.759 "io_qpairs": 196, 00:13:04.759 "current_admin_qpairs": 0, 00:13:04.759 "current_io_qpairs": 0, 00:13:04.759 "pending_bdev_io": 0, 00:13:04.759 "completed_nvme_io": 295, 00:13:04.759 "transports": [ 00:13:04.760 { 00:13:04.760 "trtype": "TCP" 00:13:04.760 } 00:13:04.760 ] 00:13:04.760 }, 00:13:04.760 { 00:13:04.760 "name": "nvmf_tgt_poll_group_1", 00:13:04.760 "admin_qpairs": 2, 00:13:04.760 "io_qpairs": 196, 00:13:04.760 "current_admin_qpairs": 0, 00:13:04.760 "current_io_qpairs": 0, 00:13:04.760 "pending_bdev_io": 0, 00:13:04.760 "completed_nvme_io": 275, 00:13:04.760 "transports": [ 00:13:04.760 { 00:13:04.760 "trtype": "TCP" 00:13:04.760 } 00:13:04.760 ] 00:13:04.760 }, 00:13:04.760 { 00:13:04.760 "name": "nvmf_tgt_poll_group_2", 00:13:04.760 "admin_qpairs": 1, 00:13:04.760 "io_qpairs": 196, 00:13:04.760 "current_admin_qpairs": 0, 00:13:04.760 "current_io_qpairs": 0, 00:13:04.760 "pending_bdev_io": 0, 00:13:04.760 "completed_nvme_io": 317, 00:13:04.760 "transports": [ 00:13:04.760 { 00:13:04.760 "trtype": "TCP" 00:13:04.760 } 00:13:04.760 ] 00:13:04.760 }, 00:13:04.760 { 00:13:04.760 "name": "nvmf_tgt_poll_group_3", 00:13:04.760 "admin_qpairs": 2, 00:13:04.760 "io_qpairs": 196, 00:13:04.760 "current_admin_qpairs": 0, 00:13:04.760 "current_io_qpairs": 0, 00:13:04.760 "pending_bdev_io": 0, 00:13:04.760 "completed_nvme_io": 247, 00:13:04.760 "transports": [ 00:13:04.760 { 00:13:04.760 "trtype": "TCP" 00:13:04.760 } 00:13:04.760 ] 00:13:04.760 } 00:13:04.760 ] 00:13:04.760 }' 00:13:04.760 08:47:21 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:04.760 08:47:21 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:04.760 08:47:21 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:04.760 08:47:21 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:04.760 08:47:21 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:04.760 08:47:21 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:04.760 08:47:21 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:04.760 08:47:21 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:04.760 08:47:21 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:04.760 08:47:21 -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:13:04.760 08:47:21 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:04.760 08:47:21 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:04.760 08:47:21 -- target/rpc.sh@123 -- # nvmftestfini 00:13:04.760 08:47:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:04.760 08:47:21 -- nvmf/common.sh@117 -- # sync 00:13:04.760 08:47:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.760 08:47:21 -- nvmf/common.sh@120 -- # set +e 00:13:04.760 08:47:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.760 08:47:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.760 rmmod nvme_tcp 00:13:04.760 rmmod nvme_fabrics 00:13:04.760 rmmod nvme_keyring 00:13:04.760 08:47:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.760 08:47:21 -- nvmf/common.sh@124 -- # set -e 00:13:04.760 08:47:21 -- 
nvmf/common.sh@125 -- # return 0 00:13:04.760 08:47:21 -- nvmf/common.sh@478 -- # '[' -n 1969963 ']' 00:13:04.760 08:47:21 -- nvmf/common.sh@479 -- # killprocess 1969963 00:13:04.760 08:47:21 -- common/autotest_common.sh@936 -- # '[' -z 1969963 ']' 00:13:04.760 08:47:21 -- common/autotest_common.sh@940 -- # kill -0 1969963 00:13:04.760 08:47:21 -- common/autotest_common.sh@941 -- # uname 00:13:04.760 08:47:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:04.760 08:47:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1969963 00:13:04.760 08:47:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:04.760 08:47:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:04.760 08:47:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1969963' 00:13:04.760 killing process with pid 1969963 00:13:04.760 08:47:22 -- common/autotest_common.sh@955 -- # kill 1969963 00:13:04.760 08:47:22 -- common/autotest_common.sh@960 -- # wait 1969963 00:13:05.019 08:47:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:05.019 08:47:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:05.019 08:47:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:05.019 08:47:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:05.019 08:47:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:05.019 08:47:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.019 08:47:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.020 08:47:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.559 08:47:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:07.559 00:13:07.559 real 0m36.068s 00:13:07.559 user 1m47.220s 00:13:07.559 sys 0m8.400s 00:13:07.559 08:47:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:07.559 08:47:24 -- common/autotest_common.sh@10 -- # set +x 00:13:07.559 ************************************ 00:13:07.559 END TEST nvmf_rpc 00:13:07.559 ************************************ 00:13:07.559 08:47:24 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:07.559 08:47:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:07.559 08:47:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:07.559 08:47:24 -- common/autotest_common.sh@10 -- # set +x 00:13:07.559 ************************************ 00:13:07.559 START TEST nvmf_invalid 00:13:07.559 ************************************ 00:13:07.559 08:47:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:07.559 * Looking for test storage... 
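
The qpair totals just checked come from the jsum helper, which is visible above only through its xtrace. A sketch reconstructed from those lines, assuming the captured nvmf_get_stats JSON is held in $stats:

    jsum() {
        local filter=$1
        # apply a jq filter across all poll groups, then total the values with awk
        echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
    }
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 784 in this run
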
00:13:07.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.559 08:47:24 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.559 08:47:24 -- nvmf/common.sh@7 -- # uname -s 00:13:07.559 08:47:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.559 08:47:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.559 08:47:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.559 08:47:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.559 08:47:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.559 08:47:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.559 08:47:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.559 08:47:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.559 08:47:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.559 08:47:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.559 08:47:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:07.559 08:47:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:07.559 08:47:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.559 08:47:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.559 08:47:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.559 08:47:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.559 08:47:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.559 08:47:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.559 08:47:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.559 08:47:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.559 08:47:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.559 08:47:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.559 08:47:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.559 08:47:24 -- paths/export.sh@5 -- # export PATH 00:13:07.560 08:47:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.560 08:47:24 -- nvmf/common.sh@47 -- # : 0 00:13:07.560 08:47:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.560 08:47:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.560 08:47:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.560 08:47:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.560 08:47:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.560 08:47:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.560 08:47:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.560 08:47:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.560 08:47:24 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:07.560 08:47:24 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.560 08:47:24 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:07.560 08:47:24 -- target/invalid.sh@14 -- # target=foobar 00:13:07.560 08:47:24 -- target/invalid.sh@16 -- # RANDOM=0 00:13:07.560 08:47:24 -- target/invalid.sh@34 -- # nvmftestinit 00:13:07.560 08:47:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:07.560 08:47:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.560 08:47:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:07.560 08:47:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:07.560 08:47:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:07.560 08:47:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.560 08:47:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.560 08:47:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.560 08:47:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:07.560 08:47:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:07.560 08:47:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:07.560 08:47:24 -- common/autotest_common.sh@10 -- # set +x 00:13:14.180 08:47:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:14.180 08:47:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:14.180 08:47:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:14.180 08:47:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:14.180 08:47:31 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:14.180 08:47:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:14.180 08:47:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:14.180 08:47:31 -- nvmf/common.sh@295 -- # net_devs=() 00:13:14.180 08:47:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:14.180 08:47:31 -- nvmf/common.sh@296 -- # e810=() 00:13:14.180 08:47:31 -- nvmf/common.sh@296 -- # local -ga e810 00:13:14.180 08:47:31 -- nvmf/common.sh@297 -- # x722=() 00:13:14.180 08:47:31 -- nvmf/common.sh@297 -- # local -ga x722 00:13:14.180 08:47:31 -- nvmf/common.sh@298 -- # mlx=() 00:13:14.180 08:47:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:14.180 08:47:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.180 08:47:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.180 08:47:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.180 08:47:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.180 08:47:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.180 08:47:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.180 08:47:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.180 08:47:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.180 08:47:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.180 08:47:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.180 08:47:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.180 08:47:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:14.180 08:47:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:14.180 08:47:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:14.180 08:47:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.180 08:47:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:14.180 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:14.180 08:47:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.180 08:47:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:14.180 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:14.180 08:47:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:14.180 08:47:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.180 
08:47:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.180 08:47:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:14.180 08:47:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.180 08:47:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:14.180 Found net devices under 0000:af:00.0: cvl_0_0 00:13:14.180 08:47:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.180 08:47:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.180 08:47:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.180 08:47:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:14.180 08:47:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.180 08:47:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:14.180 Found net devices under 0000:af:00.1: cvl_0_1 00:13:14.180 08:47:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.180 08:47:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:14.180 08:47:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:14.180 08:47:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:14.180 08:47:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:14.180 08:47:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.180 08:47:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.180 08:47:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.180 08:47:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:14.180 08:47:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.180 08:47:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.180 08:47:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:14.180 08:47:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.180 08:47:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.180 08:47:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:14.180 08:47:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:14.180 08:47:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.180 08:47:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.180 08:47:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.180 08:47:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.180 08:47:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:14.180 08:47:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:14.180 08:47:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:14.180 08:47:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.180 08:47:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:14.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:14.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:13:14.441 00:13:14.441 --- 10.0.0.2 ping statistics --- 00:13:14.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.441 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:13:14.441 08:47:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:14.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:13:14.441 00:13:14.441 --- 10.0.0.1 ping statistics --- 00:13:14.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.441 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:13:14.441 08:47:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.441 08:47:31 -- nvmf/common.sh@411 -- # return 0 00:13:14.441 08:47:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:14.441 08:47:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.441 08:47:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:14.441 08:47:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:14.441 08:47:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.441 08:47:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:14.441 08:47:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:14.441 08:47:31 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:14.441 08:47:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:14.441 08:47:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:14.441 08:47:31 -- common/autotest_common.sh@10 -- # set +x 00:13:14.441 08:47:31 -- nvmf/common.sh@470 -- # nvmfpid=1978375 00:13:14.441 08:47:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:14.441 08:47:31 -- nvmf/common.sh@471 -- # waitforlisten 1978375 00:13:14.441 08:47:31 -- common/autotest_common.sh@817 -- # '[' -z 1978375 ']' 00:13:14.441 08:47:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.441 08:47:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:14.441 08:47:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.441 08:47:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:14.441 08:47:31 -- common/autotest_common.sh@10 -- # set +x 00:13:14.441 [2024-04-26 08:47:31.528588] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:13:14.441 [2024-04-26 08:47:31.528636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.441 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.441 [2024-04-26 08:47:31.602789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.441 [2024-04-26 08:47:31.671777] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.441 [2024-04-26 08:47:31.671818] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.441 [2024-04-26 08:47:31.671828] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.441 [2024-04-26 08:47:31.671836] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.441 [2024-04-26 08:47:31.671843] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
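
For reference, the nvmf_tcp_init plumbing that produced the ping results above, lifted out of the xtrace into plain commands (the cvl_0_0/cvl_0_1 interface names are specific to this E810 host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the host netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # host -> target netns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target netns -> host
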
00:13:14.441 [2024-04-26 08:47:31.671893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.441 [2024-04-26 08:47:31.671990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.441 [2024-04-26 08:47:31.672010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.441 [2024-04-26 08:47:31.672012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.379 08:47:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:15.379 08:47:32 -- common/autotest_common.sh@850 -- # return 0 00:13:15.379 08:47:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:15.379 08:47:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:15.379 08:47:32 -- common/autotest_common.sh@10 -- # set +x 00:13:15.379 08:47:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.379 08:47:32 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:15.379 08:47:32 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode27191 00:13:15.379 [2024-04-26 08:47:32.530762] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:15.379 08:47:32 -- target/invalid.sh@40 -- # out='request: 00:13:15.379 { 00:13:15.379 "nqn": "nqn.2016-06.io.spdk:cnode27191", 00:13:15.379 "tgt_name": "foobar", 00:13:15.379 "method": "nvmf_create_subsystem", 00:13:15.379 "req_id": 1 00:13:15.379 } 00:13:15.379 Got JSON-RPC error response 00:13:15.379 response: 00:13:15.379 { 00:13:15.379 "code": -32603, 00:13:15.379 "message": "Unable to find target foobar" 00:13:15.379 }' 00:13:15.379 08:47:32 -- target/invalid.sh@41 -- # [[ request: 00:13:15.379 { 00:13:15.379 "nqn": "nqn.2016-06.io.spdk:cnode27191", 00:13:15.379 "tgt_name": "foobar", 00:13:15.379 "method": "nvmf_create_subsystem", 00:13:15.379 "req_id": 1 00:13:15.379 } 00:13:15.379 Got JSON-RPC error response 00:13:15.379 response: 00:13:15.379 { 00:13:15.379 "code": -32603, 00:13:15.379 "message": "Unable to find target foobar" 00:13:15.379 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:15.379 08:47:32 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:15.379 08:47:32 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26095 00:13:15.638 [2024-04-26 08:47:32.715430] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26095: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:15.638 08:47:32 -- target/invalid.sh@45 -- # out='request: 00:13:15.638 { 00:13:15.638 "nqn": "nqn.2016-06.io.spdk:cnode26095", 00:13:15.638 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:15.638 "method": "nvmf_create_subsystem", 00:13:15.638 "req_id": 1 00:13:15.638 } 00:13:15.638 Got JSON-RPC error response 00:13:15.638 response: 00:13:15.638 { 00:13:15.638 "code": -32602, 00:13:15.638 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:15.638 }' 00:13:15.638 08:47:32 -- target/invalid.sh@46 -- # [[ request: 00:13:15.638 { 00:13:15.638 "nqn": "nqn.2016-06.io.spdk:cnode26095", 00:13:15.638 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:15.638 "method": "nvmf_create_subsystem", 00:13:15.638 "req_id": 1 00:13:15.638 } 00:13:15.638 Got JSON-RPC error response 00:13:15.638 response: 00:13:15.638 { 
00:13:15.638 "code": -32602, 00:13:15.638 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:15.638 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:15.638 08:47:32 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:15.638 08:47:32 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14846 00:13:15.898 [2024-04-26 08:47:32.903972] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14846: invalid model number 'SPDK_Controller' 00:13:15.898 08:47:32 -- target/invalid.sh@50 -- # out='request: 00:13:15.898 { 00:13:15.898 "nqn": "nqn.2016-06.io.spdk:cnode14846", 00:13:15.898 "model_number": "SPDK_Controller\u001f", 00:13:15.898 "method": "nvmf_create_subsystem", 00:13:15.898 "req_id": 1 00:13:15.898 } 00:13:15.898 Got JSON-RPC error response 00:13:15.898 response: 00:13:15.898 { 00:13:15.898 "code": -32602, 00:13:15.898 "message": "Invalid MN SPDK_Controller\u001f" 00:13:15.898 }' 00:13:15.898 08:47:32 -- target/invalid.sh@51 -- # [[ request: 00:13:15.898 { 00:13:15.898 "nqn": "nqn.2016-06.io.spdk:cnode14846", 00:13:15.898 "model_number": "SPDK_Controller\u001f", 00:13:15.898 "method": "nvmf_create_subsystem", 00:13:15.898 "req_id": 1 00:13:15.898 } 00:13:15.898 Got JSON-RPC error response 00:13:15.898 response: 00:13:15.898 { 00:13:15.898 "code": -32602, 00:13:15.898 "message": "Invalid MN SPDK_Controller\u001f" 00:13:15.898 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:15.898 08:47:32 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:15.898 08:47:32 -- target/invalid.sh@19 -- # local length=21 ll 00:13:15.898 08:47:32 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:15.898 08:47:32 -- target/invalid.sh@21 -- # local chars 00:13:15.898 08:47:32 -- target/invalid.sh@22 -- # local string 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # printf %x 38 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # string+='&' 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # printf %x 77 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # string+=M 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # printf %x 62 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # string+='>' 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # printf %x 99 00:13:15.898 08:47:32 -- 
target/invalid.sh@25 -- # echo -e '\x63' 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # string+=c 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # printf %x 79 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # string+=O 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # printf %x 60 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # string+='<' 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # printf %x 98 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # string+=b 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # printf %x 86 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:15.898 08:47:32 -- target/invalid.sh@25 -- # string+=V 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:32 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # printf %x 79 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # string+=O 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # printf %x 120 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # string+=x 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # printf %x 106 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # string+=j 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # printf %x 122 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # string+=z 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # printf %x 103 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # string+=g 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # printf %x 92 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # string+='\' 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # printf %x 95 00:13:15.898 08:47:33 -- 
target/invalid.sh@25 -- # echo -e '\x5f' 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # string+=_ 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # printf %x 59 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # string+=';' 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # printf %x 91 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # string+='[' 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # printf %x 47 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # string+=/ 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # printf %x 46 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # string+=. 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # printf %x 42 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # string+='*' 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # printf %x 58 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:15.898 08:47:33 -- target/invalid.sh@25 -- # string+=: 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:15.898 08:47:33 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:15.898 08:47:33 -- target/invalid.sh@28 -- # [[ & == \- ]] 00:13:15.898 08:47:33 -- target/invalid.sh@31 -- # echo '&M>cOcOcOcOcO /dev/null' 00:13:18.497 08:47:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.037 08:47:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:21.037 00:13:21.037 real 0m13.288s 00:13:21.037 user 0m20.035s 00:13:21.037 sys 0m6.409s 00:13:21.037 08:47:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:21.037 08:47:37 -- common/autotest_common.sh@10 -- # set +x 00:13:21.037 ************************************ 00:13:21.037 END TEST nvmf_invalid 00:13:21.037 ************************************ 00:13:21.037 08:47:37 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:21.037 08:47:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:21.037 08:47:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:21.037 08:47:37 -- common/autotest_common.sh@10 -- # set +x 00:13:21.038 ************************************ 00:13:21.038 START TEST nvmf_abort 00:13:21.038 ************************************ 00:13:21.038 08:47:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:21.038 * Looking for test storage... 
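
The three rejection cases above (unknown target name, bad serial number, bad model number) all follow one pattern: pass a value carrying a control byte via $'...\037', capture the JSON-RPC error that rpc.py prints, and glob-match the message. Condensed from the xtrace, with the rpc.py path abbreviated; the 2>&1 and || true guards are assumptions, since the log does not show rpc.py's exit status:

    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
            nqn.2016-06.io.spdk:cnode26095 2>&1) || true
    [[ $out == *'Invalid SN'* ]]    # expect code -32602, message "Invalid SN ..."
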
00:13:21.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.038 08:47:38 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.038 08:47:38 -- nvmf/common.sh@7 -- # uname -s 00:13:21.038 08:47:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.038 08:47:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.038 08:47:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.038 08:47:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.038 08:47:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.038 08:47:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.038 08:47:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.038 08:47:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.038 08:47:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.038 08:47:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.038 08:47:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:21.038 08:47:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:21.038 08:47:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.038 08:47:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.038 08:47:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.038 08:47:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.038 08:47:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.038 08:47:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.038 08:47:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.038 08:47:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.038 08:47:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.038 08:47:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.038 08:47:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.038 08:47:38 -- paths/export.sh@5 -- # export PATH 00:13:21.038 08:47:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.038 08:47:38 -- nvmf/common.sh@47 -- # : 0 00:13:21.038 08:47:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:21.038 08:47:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:21.038 08:47:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.038 08:47:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.038 08:47:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.038 08:47:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:21.038 08:47:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:21.038 08:47:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:21.038 08:47:38 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:21.038 08:47:38 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:21.038 08:47:38 -- target/abort.sh@14 -- # nvmftestinit 00:13:21.038 08:47:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:21.038 08:47:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.038 08:47:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:21.038 08:47:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:21.038 08:47:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:21.038 08:47:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.038 08:47:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.038 08:47:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.038 08:47:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:21.038 08:47:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:21.038 08:47:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:21.038 08:47:38 -- common/autotest_common.sh@10 -- # set +x 00:13:27.616 08:47:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:27.616 08:47:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:27.616 08:47:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:27.616 08:47:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:27.616 08:47:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:27.616 08:47:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:27.616 08:47:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:27.616 08:47:44 -- nvmf/common.sh@295 -- # net_devs=() 00:13:27.616 08:47:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:27.616 08:47:44 -- nvmf/common.sh@296 -- 
# e810=() 00:13:27.616 08:47:44 -- nvmf/common.sh@296 -- # local -ga e810 00:13:27.616 08:47:44 -- nvmf/common.sh@297 -- # x722=() 00:13:27.616 08:47:44 -- nvmf/common.sh@297 -- # local -ga x722 00:13:27.616 08:47:44 -- nvmf/common.sh@298 -- # mlx=() 00:13:27.616 08:47:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:27.616 08:47:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.616 08:47:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.616 08:47:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.616 08:47:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.616 08:47:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.616 08:47:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.616 08:47:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.616 08:47:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.616 08:47:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.616 08:47:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.616 08:47:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.616 08:47:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:27.616 08:47:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:27.616 08:47:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:27.616 08:47:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.616 08:47:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:27.616 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:27.616 08:47:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:27.616 08:47:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:27.616 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:27.616 08:47:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:27.616 08:47:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.616 08:47:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.616 08:47:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:27.616 08:47:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.616 08:47:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:27.616 Found 
net devices under 0000:af:00.0: cvl_0_0 00:13:27.616 08:47:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.616 08:47:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:27.616 08:47:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.616 08:47:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:27.616 08:47:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.616 08:47:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:27.616 Found net devices under 0000:af:00.1: cvl_0_1 00:13:27.616 08:47:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.616 08:47:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:27.616 08:47:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:27.616 08:47:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:27.616 08:47:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:27.616 08:47:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.616 08:47:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.616 08:47:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.616 08:47:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:27.616 08:47:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.616 08:47:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.616 08:47:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:27.616 08:47:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.616 08:47:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.616 08:47:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:27.616 08:47:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:27.616 08:47:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.616 08:47:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.616 08:47:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.616 08:47:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.616 08:47:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:27.616 08:47:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.616 08:47:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.876 08:47:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.876 08:47:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:27.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:13:27.876 00:13:27.876 --- 10.0.0.2 ping statistics --- 00:13:27.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.876 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:13:27.876 08:47:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:27.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:13:27.876 00:13:27.876 --- 10.0.0.1 ping statistics --- 00:13:27.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.876 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:13:27.876 08:47:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.876 08:47:44 -- nvmf/common.sh@411 -- # return 0 00:13:27.876 08:47:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:27.876 08:47:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.876 08:47:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:27.876 08:47:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:27.876 08:47:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.876 08:47:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:27.876 08:47:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:27.876 08:47:44 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:27.876 08:47:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:27.876 08:47:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:27.876 08:47:44 -- common/autotest_common.sh@10 -- # set +x 00:13:27.876 08:47:44 -- nvmf/common.sh@470 -- # nvmfpid=1983039 00:13:27.876 08:47:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:27.876 08:47:44 -- nvmf/common.sh@471 -- # waitforlisten 1983039 00:13:27.876 08:47:44 -- common/autotest_common.sh@817 -- # '[' -z 1983039 ']' 00:13:27.876 08:47:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.876 08:47:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:27.876 08:47:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.876 08:47:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:27.876 08:47:44 -- common/autotest_common.sh@10 -- # set +x 00:13:27.876 [2024-04-26 08:47:44.997619] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:13:27.876 [2024-04-26 08:47:44.997667] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.876 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.876 [2024-04-26 08:47:45.069974] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:28.136 [2024-04-26 08:47:45.142315] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.136 [2024-04-26 08:47:45.142350] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.136 [2024-04-26 08:47:45.142360] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.136 [2024-04-26 08:47:45.142369] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.136 [2024-04-26 08:47:45.142377] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:28.136 [2024-04-26 08:47:45.142482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.136 [2024-04-26 08:47:45.142606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.136 [2024-04-26 08:47:45.142608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.703 08:47:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:28.703 08:47:45 -- common/autotest_common.sh@850 -- # return 0 00:13:28.703 08:47:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:28.703 08:47:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:28.703 08:47:45 -- common/autotest_common.sh@10 -- # set +x 00:13:28.703 08:47:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.703 08:47:45 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:28.703 08:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.703 08:47:45 -- common/autotest_common.sh@10 -- # set +x 00:13:28.703 [2024-04-26 08:47:45.857784] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.703 08:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.703 08:47:45 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:28.703 08:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.703 08:47:45 -- common/autotest_common.sh@10 -- # set +x 00:13:28.703 Malloc0 00:13:28.703 08:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.703 08:47:45 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:28.703 08:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.703 08:47:45 -- common/autotest_common.sh@10 -- # set +x 00:13:28.703 Delay0 00:13:28.703 08:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.703 08:47:45 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:28.703 08:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.703 08:47:45 -- common/autotest_common.sh@10 -- # set +x 00:13:28.703 08:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.703 08:47:45 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:28.703 08:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.703 08:47:45 -- common/autotest_common.sh@10 -- # set +x 00:13:28.703 08:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.703 08:47:45 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:28.703 08:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.703 08:47:45 -- common/autotest_common.sh@10 -- # set +x 00:13:28.704 [2024-04-26 08:47:45.936857] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.704 08:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.704 08:47:45 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:28.704 08:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.704 08:47:45 -- common/autotest_common.sh@10 -- # set +x 00:13:28.704 08:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.704 08:47:45 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:28.963 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.963 [2024-04-26 08:47:46.045278] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:31.493 Initializing NVMe Controllers 00:13:31.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:31.493 controller IO queue size 128 less than required 00:13:31.493 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:31.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:31.493 Initialization complete. Launching workers. 00:13:31.493 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 116, failed: 42250 00:13:31.493 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42304, failed to submit 62 00:13:31.493 success 42254, unsuccess 50, failed 0 00:13:31.493 08:47:48 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:31.493 08:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.493 08:47:48 -- common/autotest_common.sh@10 -- # set +x 00:13:31.493 08:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.493 08:47:48 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:31.493 08:47:48 -- target/abort.sh@38 -- # nvmftestfini 00:13:31.493 08:47:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:31.493 08:47:48 -- nvmf/common.sh@117 -- # sync 00:13:31.493 08:47:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:31.493 08:47:48 -- nvmf/common.sh@120 -- # set +e 00:13:31.493 08:47:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:31.493 08:47:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:31.493 rmmod nvme_tcp 00:13:31.493 rmmod nvme_fabrics 00:13:31.493 rmmod nvme_keyring 00:13:31.493 08:47:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:31.493 08:47:48 -- nvmf/common.sh@124 -- # set -e 00:13:31.493 08:47:48 -- nvmf/common.sh@125 -- # return 0 00:13:31.493 08:47:48 -- nvmf/common.sh@478 -- # '[' -n 1983039 ']' 00:13:31.493 08:47:48 -- nvmf/common.sh@479 -- # killprocess 1983039 00:13:31.493 08:47:48 -- common/autotest_common.sh@936 -- # '[' -z 1983039 ']' 00:13:31.493 08:47:48 -- common/autotest_common.sh@940 -- # kill -0 1983039 00:13:31.493 08:47:48 -- common/autotest_common.sh@941 -- # uname 00:13:31.493 08:47:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:31.493 08:47:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1983039 00:13:31.493 08:47:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:31.493 08:47:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:31.493 08:47:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1983039' 00:13:31.493 killing process with pid 1983039 00:13:31.493 08:47:48 -- common/autotest_common.sh@955 -- # kill 1983039 00:13:31.493 08:47:48 -- common/autotest_common.sh@960 -- # wait 1983039 00:13:31.493 08:47:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:31.493 08:47:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:31.493 08:47:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:31.493 08:47:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:31.493 08:47:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:31.493 
08:47:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.493 08:47:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.493 08:47:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.029 08:47:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:34.029 00:13:34.029 real 0m12.693s 00:13:34.029 user 0m13.722s 00:13:34.029 sys 0m6.445s 00:13:34.029 08:47:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:34.029 08:47:50 -- common/autotest_common.sh@10 -- # set +x 00:13:34.029 ************************************ 00:13:34.029 END TEST nvmf_abort 00:13:34.029 ************************************ 00:13:34.029 08:47:50 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:34.029 08:47:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:34.029 08:47:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:34.029 08:47:50 -- common/autotest_common.sh@10 -- # set +x 00:13:34.029 ************************************ 00:13:34.029 START TEST nvmf_ns_hotplug_stress 00:13:34.029 ************************************ 00:13:34.029 08:47:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:34.029 * Looking for test storage... 00:13:34.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:34.029 08:47:50 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:34.029 08:47:50 -- nvmf/common.sh@7 -- # uname -s 00:13:34.029 08:47:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.029 08:47:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.029 08:47:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.029 08:47:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.029 08:47:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.029 08:47:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.029 08:47:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.029 08:47:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.029 08:47:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.029 08:47:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.029 08:47:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:34.029 08:47:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:34.029 08:47:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.029 08:47:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.029 08:47:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:34.029 08:47:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.029 08:47:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:34.029 08:47:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.029 08:47:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.029 08:47:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.029 08:47:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.029 08:47:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.029 08:47:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.029 08:47:51 -- paths/export.sh@5 -- # export PATH 00:13:34.029 08:47:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.030 08:47:51 -- nvmf/common.sh@47 -- # : 0 00:13:34.030 08:47:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:34.030 08:47:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:34.030 08:47:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.030 08:47:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.030 08:47:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.030 08:47:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:34.030 08:47:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:34.030 08:47:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:34.030 08:47:51 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:34.030 08:47:51 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:13:34.030 08:47:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:34.030 08:47:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.030 08:47:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:34.030 08:47:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:34.030 08:47:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:34.030 08:47:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:34.030 08:47:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.030 08:47:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.030 08:47:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:34.030 08:47:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:34.030 08:47:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:34.030 08:47:51 -- common/autotest_common.sh@10 -- # set +x 00:13:40.601 08:47:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:40.601 08:47:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:40.601 08:47:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:40.601 08:47:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:40.601 08:47:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:40.601 08:47:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:40.601 08:47:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:40.601 08:47:57 -- nvmf/common.sh@295 -- # net_devs=() 00:13:40.601 08:47:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:40.601 08:47:57 -- nvmf/common.sh@296 -- # e810=() 00:13:40.601 08:47:57 -- nvmf/common.sh@296 -- # local -ga e810 00:13:40.601 08:47:57 -- nvmf/common.sh@297 -- # x722=() 00:13:40.601 08:47:57 -- nvmf/common.sh@297 -- # local -ga x722 00:13:40.601 08:47:57 -- nvmf/common.sh@298 -- # mlx=() 00:13:40.601 08:47:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:40.601 08:47:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.601 08:47:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.601 08:47:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.601 08:47:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.601 08:47:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.601 08:47:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.601 08:47:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.601 08:47:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.601 08:47:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.601 08:47:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.601 08:47:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.601 08:47:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:40.601 08:47:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:40.601 08:47:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:40.601 08:47:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:40.601 08:47:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:40.601 08:47:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:40.601 08:47:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.601 08:47:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:40.601 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:40.601 08:47:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.601 08:47:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.601 08:47:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.601 08:47:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.601 08:47:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.601 08:47:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.601 08:47:57 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:40.601 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:40.601 08:47:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.601 08:47:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.601 08:47:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.601 08:47:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.601 08:47:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.601 08:47:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:40.602 08:47:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:40.602 08:47:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:40.602 08:47:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.602 08:47:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.602 08:47:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:40.602 08:47:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.602 08:47:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:40.602 Found net devices under 0000:af:00.0: cvl_0_0 00:13:40.602 08:47:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.602 08:47:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.602 08:47:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.602 08:47:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:40.602 08:47:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.602 08:47:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:40.602 Found net devices under 0000:af:00.1: cvl_0_1 00:13:40.602 08:47:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.602 08:47:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:40.602 08:47:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:40.602 08:47:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:40.602 08:47:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:40.602 08:47:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:40.602 08:47:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.602 08:47:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.602 08:47:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.602 08:47:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:40.602 08:47:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.602 08:47:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.602 08:47:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:40.602 08:47:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.602 08:47:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.602 08:47:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:40.602 08:47:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:40.602 08:47:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.602 08:47:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.602 08:47:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.602 08:47:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.602 08:47:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:40.602 08:47:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:13:40.863 08:47:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.863 08:47:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.863 08:47:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:40.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:13:40.863 00:13:40.863 --- 10.0.0.2 ping statistics --- 00:13:40.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.863 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:13:40.863 08:47:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:13:40.863 00:13:40.863 --- 10.0.0.1 ping statistics --- 00:13:40.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.863 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:13:40.863 08:47:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.863 08:47:57 -- nvmf/common.sh@411 -- # return 0 00:13:40.863 08:47:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:40.863 08:47:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.863 08:47:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:40.863 08:47:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:40.863 08:47:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.863 08:47:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:40.863 08:47:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:40.863 08:47:58 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:13:40.863 08:47:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:40.863 08:47:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:40.863 08:47:58 -- common/autotest_common.sh@10 -- # set +x 00:13:40.863 08:47:58 -- nvmf/common.sh@470 -- # nvmfpid=1987309 00:13:40.863 08:47:58 -- nvmf/common.sh@471 -- # waitforlisten 1987309 00:13:40.863 08:47:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:40.863 08:47:58 -- common/autotest_common.sh@817 -- # '[' -z 1987309 ']' 00:13:40.864 08:47:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.864 08:47:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:40.864 08:47:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.864 08:47:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:40.864 08:47:58 -- common/autotest_common.sh@10 -- # set +x 00:13:40.864 [2024-04-26 08:47:58.078760] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:13:40.864 [2024-04-26 08:47:58.078810] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.122 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.122 [2024-04-26 08:47:58.154626] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:41.122 [2024-04-26 08:47:58.225635] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.122 [2024-04-26 08:47:58.225670] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.122 [2024-04-26 08:47:58.225679] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.122 [2024-04-26 08:47:58.225688] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.122 [2024-04-26 08:47:58.225695] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:41.122 [2024-04-26 08:47:58.225796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.122 [2024-04-26 08:47:58.225898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.122 [2024-04-26 08:47:58.225900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.692 08:47:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:41.692 08:47:58 -- common/autotest_common.sh@850 -- # return 0 00:13:41.692 08:47:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:41.692 08:47:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:41.692 08:47:58 -- common/autotest_common.sh@10 -- # set +x 00:13:41.692 08:47:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.692 08:47:58 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:13:41.692 08:47:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:41.951 [2024-04-26 08:47:59.082560] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.951 08:47:59 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:42.209 08:47:59 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.209 [2024-04-26 08:47:59.436212] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.468 08:47:59 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:42.468 08:47:59 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:42.727 Malloc0 00:13:42.727 08:47:59 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:42.984 Delay0 00:13:42.984 08:48:00 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.985 08:48:00 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:43.243 NULL1 00:13:43.243 08:48:00 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:43.502 08:48:00 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:43.502 08:48:00 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=1987903 00:13:43.502 08:48:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:43.502 08:48:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.502 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.761 08:48:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.761 08:48:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:13:43.761 08:48:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:44.020 true 00:13:44.020 08:48:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:44.020 08:48:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.280 08:48:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.280 08:48:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:13:44.280 08:48:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:44.540 true 00:13:44.540 08:48:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:44.540 08:48:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.918 Read completed with error (sct=0, sc=11) 00:13:45.918 08:48:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.918 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.918 08:48:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:13:45.918 08:48:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:46.178 true 00:13:46.178 08:48:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:46.178 08:48:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.115 08:48:04 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.115 08:48:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:13:47.115 08:48:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:47.374 true 00:13:47.374 08:48:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:47.374 08:48:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.374 08:48:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.633 08:48:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:13:47.633 08:48:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:47.892 true 00:13:47.892 08:48:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:47.892 08:48:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.892 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.151 08:48:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.151 08:48:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:13:48.151 08:48:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:48.410 true 00:13:48.410 08:48:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:48.410 08:48:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.348 08:48:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.348 08:48:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:13:49.348 08:48:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:49.607 true 00:13:49.607 08:48:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:49.607 08:48:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.866 08:48:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.866 08:48:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:13:49.866 08:48:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:50.126 true 00:13:50.126 
08:48:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:50.126 08:48:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.385 08:48:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.385 08:48:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:13:50.385 08:48:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:50.646 true 00:13:50.646 08:48:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:50.646 08:48:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.906 08:48:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.906 08:48:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:13:50.906 08:48:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:51.170 true 00:13:51.170 08:48:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:51.170 08:48:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.429 08:48:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.688 08:48:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:13:51.688 08:48:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:51.688 true 00:13:51.688 08:48:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:51.688 08:48:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.946 08:48:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.205 08:48:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:13:52.205 08:48:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:52.205 true 00:13:52.205 08:48:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:52.205 08:48:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.584 08:48:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.584 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.584 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:13:53.584 08:48:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:13:53.584 08:48:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:53.841 true 00:13:53.842 08:48:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:53.842 08:48:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:54.777 08:48:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.777 08:48:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:13:54.777 08:48:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:55.036 true 00:13:55.036 08:48:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:55.036 08:48:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.299 08:48:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.299 08:48:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:13:55.299 08:48:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:55.558 true 00:13:55.558 08:48:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:55.558 08:48:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.817 08:48:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.817 08:48:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:13:55.817 08:48:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:56.075 true 00:13:56.075 08:48:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:56.075 08:48:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.333 08:48:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.333 08:48:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:13:56.333 08:48:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:56.591 true 00:13:56.592 08:48:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:56.592 08:48:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.968 08:48:14 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:57.968 08:48:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:13:57.968 08:48:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:58.228 true 00:13:58.228 08:48:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:58.228 08:48:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.165 08:48:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.165 08:48:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:13:59.165 08:48:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:59.424 true 00:13:59.424 08:48:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:59.424 08:48:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.424 08:48:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.683 08:48:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:13:59.683 08:48:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:59.943 true 00:13:59.943 08:48:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:13:59.943 08:48:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.352 08:48:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:01.352 08:48:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:14:01.352 08:48:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:01.352 true 00:14:01.352 08:48:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:14:01.352 08:48:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.300 08:48:19 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.300 08:48:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:14:02.300 08:48:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:02.558 true 00:14:02.558 08:48:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:14:02.558 08:48:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.817 08:48:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.076 08:48:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:14:03.076 08:48:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:03.076 true 00:14:03.076 08:48:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:14:03.076 08:48:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:04.451 08:48:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:04.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:04.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:04.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:04.452 08:48:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:14:04.452 08:48:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:04.710 true 00:14:04.710 08:48:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:14:04.710 08:48:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.655 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.655 08:48:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.655 08:48:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:14:05.655 08:48:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:05.913 true 00:14:05.913 08:48:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:14:05.913 08:48:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.171 08:48:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.171 08:48:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:14:06.171 08:48:23 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:14:06.429 true
00:14:06.429 08:48:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903
00:14:06.429 08:48:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:06.687 08:48:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:06.945 08:48:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027
00:14:06.945 08:48:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:14:06.945 true
00:14:06.945 08:48:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903
00:14:06.945 08:48:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:07.204 08:48:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:07.467 08:48:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028
00:14:07.467 08:48:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:14:07.467 true
00:14:07.468 08:48:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903
00:14:07.468 08:48:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:07.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:07.730 08:48:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:07.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:07.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:07.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:08.009 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:08.009 [2024-04-26 08:48:25.045281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:08.012 [the same ctrlr_bdev.c:309 read-length error repeated verbatim for every queued read, 08:48:25.045358 through 08:48:25.056223; repeats elided]
00:14:08.012 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:14:08.015 [the same ctrlr_bdev.c:309 read-length error repeated verbatim for every queued read, 08:48:25.056266 through 08:48:25.071645; repeats elided]
size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.071696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.071748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.071796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.071845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.071893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.071941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.071990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.072034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.072077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.072118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.072162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.072203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.072716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.072769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.072811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.072857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.072897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.072941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.072989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.073040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.073086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.073133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.073180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.073216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.073265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.073315] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.073361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.073407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.073459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.073509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.073558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.073613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.015 [2024-04-26 08:48:25.073661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.073708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.073756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.073808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.073860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.073909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.073960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 
[2024-04-26 08:48:25.074616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.074977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.075737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 08:48:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:14:08.016 [2024-04-26 08:48:25.076261] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 08:48:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:08.016 [2024-04-26 08:48:25.076628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.076995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.077044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.077095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.077145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.077195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.077242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.077289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.077340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.016 [2024-04-26 08:48:25.077385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
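The error text comes from the read-path length validation in nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c): a read of NLB blocks at the namespace block size must fit in the payload described by the request's SGL, and while ns_hotplug_stress resizes the NULL1 bdev the target keeps rejecting 1-block (512-byte) reads whose SGL describes only 1 byte. Below is a minimal standalone sketch of that check; only the file, function, and message text are confirmed by this log, and the struct and helper names are illustrative stand-ins, not SPDK's actual definitions.

/*
 * Minimal sketch of the length check behind the repeated error above.
 * Assumption: struct read_req and read_len_ok() are hypothetical; the
 * real code lives in SPDK's lib/nvmf/ctrlr_bdev.c.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

struct read_req {
	uint64_t num_blocks;  /* NLB from the command (0-based in NVMe, +1 applied) */
	uint32_t block_size;  /* namespace block size; 512 throughout this run */
	uint32_t sgl_length;  /* payload length described by the request's SGL */
};

/* Rejects a read whose data cannot fit in the SGL-described buffer,
 * i.e. the condition reported at ctrlr_bdev.c:309. */
static bool read_len_ok(const struct read_req *req)
{
	if (req->num_blocks * req->block_size > req->sgl_length) {
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n",
			req->num_blocks, req->block_size, req->sgl_length);
		return false; /* completed as a data-transfer error, not queued */
	}
	return true;
}

int main(void)
{
	/* The values seen on every line of this storm: 1 * 512 > 1. */
	struct read_req req = { .num_blocks = 1, .block_size = 512, .sgl_length = 1 };
	return read_len_ok(&req) ? 0 : 1;
}

Run standalone this prints the line once and exits nonzero; in the target the failure is returned per request, and the stress script presumably keeps issuing such reads while resizing, which is why the same line floods the log.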
[... the ctrlr_bdev.c:309 error continues uninterrupted while the resize is processed; duplicate lines from 08:48:25.076261 through 08:48:25.094267 collapsed ...]
[2024-04-26 08:48:25.094309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.094994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.095991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.096043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.096099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.096144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.096190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.096238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.096288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.096339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.096388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.096436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.096488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.096974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097151] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.020 [2024-04-26 08:48:25.097897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.097950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 
[2024-04-26 08:48:25.098409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.098959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.099949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.100009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.100512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.100561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.100594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.100636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.100678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.100726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.100773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.100820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.100865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.100915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.100953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101306] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.101981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 
[2024-04-26 08:48:25.102566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.102960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.103003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.103043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.021 [2024-04-26 08:48:25.103089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.103130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.103173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.103214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.103255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.103294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.103341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.103392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.103890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.103938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.103995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.104994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105269] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.105962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 
[2024-04-26 08:48:25.106571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.106872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.107377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.107429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.107484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.107529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.107566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.107610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.107650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.107696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.107743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.107789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.107838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.107887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.107936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.107984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.108027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.108060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.108106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.108147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.108188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.108234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.108278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.108327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.108369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.108411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.108461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.108503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.108552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.022 [2024-04-26 08:48:25.108601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.108656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.108711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.108757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.108804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.108849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.108895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.108942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.108995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109455] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.109996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.110036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.110075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.110119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.110159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.110205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.110250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.110844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.110897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.110944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.110995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 
[2024-04-26 08:48:25.111191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:08.023 [2024-04-26 08:48:25.111299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.111993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 
08:48:25.112432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.112966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.113008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.113047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.113094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.113140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.113186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.113230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.113275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.113322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.113370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.113415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.113457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.113497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.023 [2024-04-26 08:48:25.113542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1
00:14:08.023 [2024-04-26 08:48:25.113580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:08.023-00:14:08.029 [2024-04-26 08:48:25.113623 through 08:48:25.143175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical message repeated back-to-back for the remainder of this burst; duplicates elided) 00:14:08.029
[2024-04-26 08:48:25.143222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.143988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.144035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.144074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.144117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.144160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.144202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.144244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.144286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.144328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.144376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.144427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.144473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.144519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.029 [2024-04-26 08:48:25.144556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.144604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.144646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.144686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.144735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.144781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.144830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.144884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.144940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.145443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.145495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.145542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.145574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.145614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.145663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.145702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.145747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.145792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.145842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.145881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.145934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.145980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146029] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.146965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 
[2024-04-26 08:48:25.147252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.147967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.148018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.148067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.148115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.148174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.148220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.148266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.148319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.148369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.148421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.148943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.148988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.149956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.150011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.150065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.150112] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.030 [2024-04-26 08:48:25.150160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.150961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 
[2024-04-26 08:48:25.151312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.151769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.152984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.153997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154161] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.154975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.155029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.155080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.155130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.155179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.031 [2024-04-26 08:48:25.155228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.155273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.155794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.155845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 
[2024-04-26 08:48:25.155890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.155931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.155979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.156965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.157963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158306] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.158762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.159918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 
[2024-04-26 08:48:25.159967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.160960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.161006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.032 [2024-04-26 08:48:25.161054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.033 [2024-04-26 08:48:25.161101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.033 [2024-04-26 08:48:25.161150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.033 [2024-04-26 08:48:25.161205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:14:08.033 [2024-04-26 08:48:25.161252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the same *ERROR* line repeats back-to-back several hundred times, timestamps advancing from 08:48:25.161299 through 08:48:25.166492 ...]
00:14:08.034 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the repetition continues uninterrupted, timestamps advancing from 08:48:25.166541 through 08:48:25.189449 ...]
size 512 > SGL length 1 00:14:08.037 [2024-04-26 08:48:25.189499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.037 [2024-04-26 08:48:25.189545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.037 [2024-04-26 08:48:25.189588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.037 [2024-04-26 08:48:25.189636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.037 [2024-04-26 08:48:25.189688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.037 [2024-04-26 08:48:25.189740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.189789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.189836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.189892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.189940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.189991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.190043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.190094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.190147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.190199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.190251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.190300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.190812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.190864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.190913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.190946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.190993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191172] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.191999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 
[2024-04-26 08:48:25.192324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.192979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.193025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.193065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.193109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.193158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.193214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.193268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.193316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.193361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.193411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.193472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.193525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.193569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.193617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.193674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.194961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195247] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.195949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.196000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.196056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.038 [2024-04-26 08:48:25.196102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 
[2024-04-26 08:48:25.196449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.196959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.197005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.197059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.197111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.197160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.197216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.197711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.197767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.197814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.197857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.197899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.197945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.197985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.198989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199289] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.199993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.200037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.200080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.200124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.200167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.200206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.200246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.200290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.200337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.200380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.200424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 
[2024-04-26 08:48:25.200475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.200530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.201997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.202046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.202097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.202146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.202196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.202242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.202286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.202331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.202365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.202408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.039 [2024-04-26 08:48:25.202461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.202512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.202553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.202598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.202645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.202690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.202736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.202780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.202824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.202865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.202899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.202949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.202993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203350] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.203948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.204001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.204535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.204593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.204642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.204693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.204745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.204795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.204844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.204896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.204950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.204999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 
[2024-04-26 08:48:25.205093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.205953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.206978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.207011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.207064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.207106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.207150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.207191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.207240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.207284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.207324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.207357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.207398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.207445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.207493] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.207998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.208953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.209003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.209053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.209105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.040 [2024-04-26 08:48:25.209150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.041 
[2024-04-26 08:48:25.209198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:14:08.041 [... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated continuously, timestamps 08:48:25.209247 through 08:48:25.221890 ...] 
00:14:08.042 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:14:08.042 [... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated continuously, timestamps 08:48:25.221938 through 08:48:25.238423 ...] 
00:14:08.327 [2024-04-26 08:48:25.238474] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.238521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.238579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.238629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.238679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.238734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.238784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.238843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.238893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.239460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.239511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.239552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.239601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.239635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.239675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.239719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.327 [2024-04-26 08:48:25.239762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.239804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.239851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.239895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.239945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.239988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 
[2024-04-26 08:48:25.240211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.240983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.241979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.242025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.242075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.242119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.242160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.242203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.242243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.242279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.242322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.242892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.242939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.242985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243142] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.243957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 
[2024-04-26 08:48:25.244415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.244984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.245028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.245070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.245119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.245166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.245211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.245258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.245309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.245359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.245411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.245470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.245526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.245572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.245621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.245668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.328 [2024-04-26 08:48:25.245721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.245771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.245822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.245872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.245920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.246460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.246512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.246566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.246611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.246654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.246692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.246731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.246774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.246812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.246864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.246906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.246953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.246994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247243] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 true 00:14:08.329 [2024-04-26 08:48:25.247592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.247985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:14:08.329 [2024-04-26 08:48:25.248445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.248997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.249046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.249094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.249145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.249194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.249246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.249299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.249344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.249843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.249887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.249934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.249979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 
1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.250993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251392] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.251963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.252002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.329 [2024-04-26 08:48:25.252043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.252085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.252131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.252176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.252218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.252269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.252317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.252364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.252401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.252461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.252519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 
[2024-04-26 08:48:25.252570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.252622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.252668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.252713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.252762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.252806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.253316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.253369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.253428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.253485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.253543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.253595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.253645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.253700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.253744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.253793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.253825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.253871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.253914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.253954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.254971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255429] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.255977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.256022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.256072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.256118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.256168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.256221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.256273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.256780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.256832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.256877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.256911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.256955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.256996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.257041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 [2024-04-26 08:48:25.257090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.330 
[2024-04-26 08:48:25.257132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[same ctrlr_bdev.c:309 *ERROR* line repeats continuously, timestamps 08:48:25.257175 through 08:48:25.272276; duplicates elided]
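Each entry above is the same sanity check tripping in the SPDK target's NVMe-oF bdev layer: a read of NLB 1 at a 512-byte block size needs 512 bytes of payload buffer, but the request's SGL describes only 1 byte, so the command is failed back to the host instead of being submitted to the bdev. As a rough illustration of the shape of that check (simplified placeholder names and return values, not SPDK's exact code):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative sketch of the bounds check behind
     * "Read NLB 1 * block size 512 > SGL length 1" (ctrlr_bdev.c:309).
     * Names are simplified placeholders, not SPDK's exact API. */
    static int
    read_cmd_length_check(uint64_t num_blocks, uint32_t block_size,
                          uint32_t sgl_length)
    {
            /* The read would transfer num_blocks * block_size bytes; the SGL
             * supplied by the host must be large enough to hold them all. */
            if (num_blocks * block_size > sgl_length) {
                    fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                            " > SGL length %" PRIu32 "\n",
                            num_blocks, block_size, sgl_length);
                    return -1; /* complete the command with an error status */
            }
            return 0; /* safe to submit to the backing bdev */
    }

    int main(void)
    {
            /* The failing case from this log: 1 block * 512 bytes > 1-byte SGL. */
            return read_cmd_length_check(1, 512, 1) ? 1 : 0;
    }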
08:48:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903
[same ctrlr_bdev.c:309 *ERROR* line repeats, timestamps 08:48:25.272319 through 08:48:25.272617]
08:48:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[same ctrlr_bdev.c:309 *ERROR* line repeats, timestamps 08:48:25.272668 through 08:48:25.274867]
Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[same ctrlr_bdev.c:309 *ERROR* line repeats, timestamps 08:48:25.274912 through 08:48:25.275087]
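The "Message suppressed 999 times" line is the host side of the same storm: the initiator's logger saw the identical read-completion error (sct=0, sc=15) so many times that it collapsed the repeats into a single summary. A generic sketch of that kind of duplicate suppression (illustrative only, not taken from the tool that produced this log):

    #include <stdio.h>
    #include <string.h>

    /* Illustrative duplicate-suppressing logger: print a message the first
     * time, count identical repeats, and emit one summary line for the rest. */
    static char last_msg[256];
    static unsigned repeats;

    static void
    log_once(const char *msg)
    {
            if (strcmp(msg, last_msg) == 0) {
                    repeats++;      /* swallow the duplicate */
                    return;
            }
            if (repeats > 0) {
                    printf("Message suppressed %u times: %s\n", repeats, last_msg);
            }
            printf("%s\n", msg);
            snprintf(last_msg, sizeof(last_msg), "%s", msg);
            repeats = 0;
    }

    int main(void)
    {
            /* 1000 identical errors -> one line plus one 999-repeat summary. */
            for (int i = 0; i < 1000; i++) {
                    log_once("Read completed with error (sct=0, sc=15)");
            }
            log_once("done");
            return 0;
    }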
[same ctrlr_bdev.c:309 *ERROR* line repeats continuously, timestamps 08:48:25.275131 through 08:48:25.286283; duplicates elided]
size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.286329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.286377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.286423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.286476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.286522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.286566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.286620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.286666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.286711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.286756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.286806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.286854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.286904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.286951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.287001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.287054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.287104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.287157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.287210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.287258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.287312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.287353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.287398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.287445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.287494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.335 [2024-04-26 08:48:25.287533] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.287577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.288982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 
[2024-04-26 08:48:25.289247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.289978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.290984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.291507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.291557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.291610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.291655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.291704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.291750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.291797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.291852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.291900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.291947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.291994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292095] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.292968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 
[2024-04-26 08:48:25.293272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.336 [2024-04-26 08:48:25.293826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.293873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.293919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.293968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.294019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.294072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.294122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.294175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.294221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.294272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.294325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.294370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.294424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.294477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.294992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.295959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296217] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.296987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 
[2024-04-26 08:48:25.297428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.297906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.298408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.298467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.298514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.298567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.298618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.298664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.298713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.298766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.298812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.298856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.298907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.298954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.299992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.300038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.300069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.300113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.300155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.337 [2024-04-26 08:48:25.300208] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.300945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.301002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.301052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.301101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.301152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.301196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.301244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.301296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.301343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.301861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 
[2024-04-26 08:48:25.301912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.301959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.302975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.303989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.304040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.304083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.304130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.304178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.304228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.338 [2024-04-26 08:48:25.304280] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated several hundred times between 08:48:25.304 and 08:48:25.332; duplicates omitted ...]
00:14:08.343 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[2024-04-26 08:48:25.332150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.332201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.332262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.332311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.332359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.332414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.332922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.332973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.333988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334963] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.334999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.343 [2024-04-26 08:48:25.335753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.335800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.336335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.336388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.336442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.336503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.336551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.336601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.336651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 
[2024-04-26 08:48:25.336704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.336749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.336798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.336846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.336903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.336956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.337969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.338984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.339024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.339068] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.339112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.339164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.339215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.339263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.339789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.339840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.339891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.339946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.339993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 
[2024-04-26 08:48:25.340755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.340988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.341958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.342002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.342038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.342083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.342130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.342172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.344 [2024-04-26 08:48:25.342218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.342263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.342307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.342353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.342404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.342455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.342492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.342536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.342576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.342620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.342666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.342714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.343320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.343376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.343422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.343474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.343529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.343577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.343626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.343680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.343726] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.343775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.343822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.343875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.343921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.343970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.344970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 
[2024-04-26 08:48:25.345021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.345965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.346002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.346051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.346093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.346135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.346179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.346218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.346258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.346775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.346827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.346876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.346922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.346975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347805] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.347980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.348025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.348071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.345 [2024-04-26 08:48:25.348112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.348953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 
[2024-04-26 08:48:25.348998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.349047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.349100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.349151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.349204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.349250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.349298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.349345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.349402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.349456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.349507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.349559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.349614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.349664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.349712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.349759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.350955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351864] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 [2024-04-26 08:48:25.351915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.346 
[... last message repeated for every subsequent entry in this span, timestamps 08:48:25.351962 through 08:48:25.382100, log clock 00:14:08.346-00:14:08.351 ...]
[2024-04-26 08:48:25.382148] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.382986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 
[2024-04-26 08:48:25.383337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.383984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.384947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.385004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.385499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.385550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.385597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.385643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.385682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.385722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.385765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:08.351 [2024-04-26 08:48:25.385812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.385859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.385904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.351 [2024-04-26 08:48:25.385947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.385979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:08.352 [2024-04-26 08:48:25.386211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.386960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.387982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.388026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.388063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.388110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.388152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.388197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.388236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.388276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.388322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.388364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.388875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.388934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.388982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389082] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.389956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 
[2024-04-26 08:48:25.390366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.390982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.391861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.392370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.392425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.392487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.392540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.392589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.392641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.392689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.392735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.392779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.392818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.392861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.392909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.392954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.392997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.393042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.393081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.393130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.393173] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.393217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.393266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.393315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.352 [2024-04-26 08:48:25.393360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.393408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.393456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.393502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.393542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.393592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.393642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.393692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.393740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.393794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.393844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.393897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.393944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.393990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 
[2024-04-26 08:48:25.394446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.394958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.395004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.395058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.395098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.395143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.395185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.395225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.395268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.395310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.395842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.395900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.395947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.396959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397370] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.397997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 
[2024-04-26 08:48:25.398522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.398825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.399332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.399382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.399427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.399487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.399551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.399597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.399648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.399691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.399750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.399796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.399844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.399891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.399935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.399984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.400028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.400061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.400110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.400153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.353 [2024-04-26 08:48:25.400197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:14:08.353 [2024-04-26 08:48:25.400243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same *ERROR* line repeated verbatim for every queued read; identical entries stamped 08:48:25.400290 through 08:48:25.423078 trimmed ...]
00:14:08.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:08.356 08:48:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:08.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:08.356 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:08.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:08.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:08.641 [2024-04-26 08:48:25.625907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same *ERROR* line repeated; identical entries stamped 08:48:25.625962 through 08:48:25.631474 trimmed ...]
00:14:08.642 [2024-04-26 08:48:25.631521] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.642 [2024-04-26 08:48:25.631568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.642 [2024-04-26 08:48:25.631621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.642 [2024-04-26 08:48:25.631667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.642 [2024-04-26 08:48:25.631727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.642 [2024-04-26 08:48:25.631775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.631822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.631876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.631922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.631973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.632017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.632065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.632124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.632174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.632222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.632268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.632311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.632355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.632564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.632924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.632977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 
[2024-04-26 08:48:25.633227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.633954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.634994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.635037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.635087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.635135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.635184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.635231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.635277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.635327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.635372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.635416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.635474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.635525] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.635576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.635629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.635680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.636980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.637033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.637080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.637133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 [2024-04-26 08:48:25.637181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.643 
[2024-04-26 08:48:25.637228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.637996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.638997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.639037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.639085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.639276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.639617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.639669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.639714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.639767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.639814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.639862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.639905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.639949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.639996] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:08.644 [2024-04-26 08:48:25.640320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.640964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641097] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.641972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.644 [2024-04-26 08:48:25.642015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.642061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.642104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.642150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.642205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.642254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 
[2024-04-26 08:48:25.642307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.642358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.642410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.642931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.642981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.643963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.644985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645138] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.645895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.646083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.646461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.646503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.646551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.646592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.646632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.646685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.646735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.646787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.646834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 
[2024-04-26 08:48:25.646883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.646933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.646988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.647037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.647087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.647138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.647181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.647227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.647273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.647325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.647375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.647426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.647482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.647539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.647590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.647641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.645 [2024-04-26 08:48:25.647687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.647734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.647779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.647825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.647871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.647903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.647949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.647989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.648986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.649037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.649079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.649119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.649156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.649201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.649245] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.649289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.649807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.649859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.649907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.649956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 
[2024-04-26 08:48:25.650924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.650977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.646 [2024-04-26 08:48:25.651857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.651904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.651953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
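For context on the flood above: the target's bdev layer rejects each read because the requested transfer length (NLB x 512-byte blocks) is larger than the buffer described by the command's SGL, 512 bytes requested against a 1-byte SGL, and completes it with sct=0, sc=15 (0x0f, consistent with Data SGL Length Invalid and the suppressed-message line). A minimal shell restatement of the inequality in the message, illustrative only and not the SPDK source; the variable names are made up:

# Illustrative only: restates the inequality from the log message.
# A read is rejected when NLB * block_size exceeds the SGL length.
nlb=1; block_size=512; sgl_length=1
if [ $((nlb * block_size)) -gt "$sgl_length" ]; then
    echo "Read NLB $nlb * block size $block_size > SGL length $sgl_length"
fi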
size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.652807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.653003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 08:48:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:14:08.647 [2024-04-26 08:48:25.653350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.653395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.653445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.653497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.653541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.653579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.653633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 [2024-04-26 08:48:25.653686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.647 08:48:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:08.647 [2024-04-26 08:48:25.653741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > 
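
The @40/@41 trace lines above are one pass of the test's resize loop: it keeps growing the NULL1 null bdev one step at a time while host I/O is still in flight. A minimal sketch of such a loop, assuming a bash harness around the rpc.py call shown in the log (the starting value, step, and iteration count are illustrative, not the script's exact parameters):

    #!/usr/bin/env bash
    # Sketch: repeatedly resize a null bdev while reads are outstanding.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    null_size=1024   # assumed starting size; the log shows 1030 at this iteration
    for _ in $(seq 1 30); do
        null_size=$((null_size + 1))
        "$rootdir/scripts/rpc.py" bdev_null_resize NULL1 "$null_size"
    done
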
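Each flooded line is the NVMe-oF target rejecting a host read that arrived during the resize window: the read handler in ctrlr_bdev.c refuses any read whose transfer length (NLB times the namespace block size) exceeds what the request's data SGL can hold, and here 1 block * 512 bytes is larger than the 1-byte SGL. The rejected condition, restated as a toy shell check with the values taken from the logged message (an illustration of the arithmetic, not SPDK's actual code):

    #!/usr/bin/env bash
    # Toy re-check of the logged failure: a read of nlb blocks needs
    # nlb * block_size bytes of SGL space.
    nlb=1          # logical blocks requested by the read
    block_size=512 # namespace block size in bytes
    sgl_length=1   # bytes described by the request's SGL
    if [ $((nlb * block_size)) -gt "$sgl_length" ]; then
        echo "Read NLB ${nlb} * block size ${block_size} > SGL length ${sgl_length}" >&2
    fi
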
00:14:08.647 (read error repeated several hundred more times, 08:48:25.653 through 08:48:25.678, while the resize was processed)
00:14:08.652 [2024-04-26 08:48:25.678918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652
[2024-04-26 08:48:25.678969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.679966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.680010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.680051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.680096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.680142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.680190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.680229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.680276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.680327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.680379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.680428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.680483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.680531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.680585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.680638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.680839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681820] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.681992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.682984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.683030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 
[2024-04-26 08:48:25.683082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.683127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.683176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.652 [2024-04-26 08:48:25.683223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.683969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.684019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.684068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.684639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.684685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.684723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.684772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.684823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.684870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.684915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.684969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.685963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686062] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.686966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.687011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.687050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.687097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.687128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 
[2024-04-26 08:48:25.687157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.687188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.687217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.687257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.687301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.687351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.687395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.687436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.687489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.687690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.688073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.688126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.688175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.688224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.688268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.688321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.688372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.688429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.688481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.688529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.688576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.688628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.688677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.688731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.653 [2024-04-26 08:48:25.688782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.688833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.688887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.688935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.688977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.689965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690010] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.690971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.691017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.691573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.691619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.691661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.691705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.691748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 
[2024-04-26 08:48:25.691789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.691836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.691886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.691932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.691977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.692964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.654 [2024-04-26 08:48:25.693898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.693943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.693994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.694040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.694091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.694132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.694176] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.694224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.694262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.694294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.694336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.694380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.694423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.694474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.694668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:08.655 [2024-04-26 08:48:25.695447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695846] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.695997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 [2024-04-26 08:48:25.696941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.655 
[2024-04-26 08:48:25.696988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:14:08.655 [... identical *ERROR* line repeated several hundred times (ctrlr_bdev.c timestamps 08:48:25.696988 through 08:48:25.726796, Jenkins clock 00:14:08.655-00:14:08.661); duplicate log lines collapsed ...]
[2024-04-26 08:48:25.726842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.726894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.726943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.726989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.727986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.661 [2024-04-26 08:48:25.728730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.728776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.728819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.728867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.728917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.728968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.729020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.729063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.729111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.729162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.729212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.729260] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.729316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.729521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.729886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.729945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.729997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.730959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 
[2024-04-26 08:48:25.731000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.731997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.732040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.732090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.732129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.732172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.732215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.732274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.732319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.732370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.732421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.732471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.732523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.732573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.732621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.732668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.732724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733781] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.662 [2024-04-26 08:48:25.733996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.734979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 
[2024-04-26 08:48:25.735020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.735964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.736019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.736072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.736124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.736175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.736371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.736741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.736796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.736844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.736891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.736943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.736988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737932] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.737969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.738993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 
[2024-04-26 08:48:25.739024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.739076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.739122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.739162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.663 [2024-04-26 08:48:25.739206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.739247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.739297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.739346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.739399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.739447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.739505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.739553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.740996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741947] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.741998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.742981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.743025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.743223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 
[2024-04-26 08:48:25.743576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.743627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.664 [2024-04-26 08:48:25.743677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.743730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.743785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.743838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.743882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.743934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.743982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.665 [2024-04-26 08:48:25.744862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.666 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:08.670 [2024-04-26 08:48:25.774172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 
[2024-04-26 08:48:25.774230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.774278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.774323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.774373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.774428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.774480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.775968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.776957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.777012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.777064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.777114] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.777161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.777209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.777260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.777308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.777365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.777399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.777443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.777495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.777534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.670 [2024-04-26 08:48:25.777576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.777622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.777664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.777713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.777760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.777811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.777856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.777901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.777934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.778126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.778517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.778565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.778611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.778661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.778713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.778761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 
[2024-04-26 08:48:25.778807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.778860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.778923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.778976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.779997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.780974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.781018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.781063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.781108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.781157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.781203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.781252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.781301] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.781344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.781377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.781422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.781474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.782994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 
[2024-04-26 08:48:25.783040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.783980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.784027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.784078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.784131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.784184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.784235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.784282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.784337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.671 [2024-04-26 08:48:25.784388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.784439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.784493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.784542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.784596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.784645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.784692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.784743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.784795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.784843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.784892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.784941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.784997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.785044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.785247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.785608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.785665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.785717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.785758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.785805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.785851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.785895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.785938] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.785985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.786961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 
[2024-04-26 08:48:25.787098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.787958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.788009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.788043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.788086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.788131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.788176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.788223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.788272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.788317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.788362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.788405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.788456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.788953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.789983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.790034] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.790085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.790142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.790201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.790252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.790301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.672 [2024-04-26 08:48:25.790352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.790401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.790463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.790511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.790560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.790608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.790653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.790704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.790755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.790802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.790849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.790897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.790950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.790998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 
[2024-04-26 08:48:25.791321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.791962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.792170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.792564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.792616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.792670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.792725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.792784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.792831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.792883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.792935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.792996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.673 [2024-04-26 08:48:25.793047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" lines repeated continuously from 08:48:25.793 through 08:48:25.806; duplicate lines omitted ...]
00:14:08.675 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... the same error continues to repeat from 08:48:25.806 through 08:48:25.822; duplicate lines omitted ...]
00:14:08.677 [2024-04-26 08:48:25.822217] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.822274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.822322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.822374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.822418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.822472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.822521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.822571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.822620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.822672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.822721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.822775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.822824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.822873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.822925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.822973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.823017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.823066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.823131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.823179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.823226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.823289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.823337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.823931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.823979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.824026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 
[2024-04-26 08:48:25.824068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.824113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.824158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.824194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.824235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.824279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.824322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.824372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.824416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.677 [2024-04-26 08:48:25.824463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.824507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.824554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.824596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.824644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.824694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.824746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.824797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 true 00:14:08.678 [2024-04-26 08:48:25.824848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.824897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.824949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.824998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read 
NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.825973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826477] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.826875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.827072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.827484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.827527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.827574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.827627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.827673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.827726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.827770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.827826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.827873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.827923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.827973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 
[2024-04-26 08:48:25.828226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.828972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.829994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.830041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.830093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.830149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.830198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.830249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.830301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.830350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.830400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.830921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.830969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831104] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.678 [2024-04-26 08:48:25.831815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.831866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.831920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.831971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 
[2024-04-26 08:48:25.832320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.832968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.833914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.834078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.834433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.834484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.834527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.834573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.834614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.834655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.834695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.834747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.834798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.834842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.834890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.834937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.834988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835191] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.835956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 
[2024-04-26 08:48:25.836472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.836989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.837034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.837081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.837125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.837167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.837215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.837256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.837304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.837831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.837880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.837928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.837983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.838034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.838090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.838125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.838175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.838218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.838255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.838306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.838348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.838391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.838436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.838485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.838533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.679 [2024-04-26 08:48:25.838569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.838608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.838651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.838694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.838739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.838789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.838837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.838890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.838937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.838985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839271] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.839994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.840044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.840096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.840142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.840189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.840242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.840293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.840342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.840393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 [2024-04-26 08:48:25.840443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.680 
00:14:08.680 [2024-04-26 08:48:25.840491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated continuously, 08:48:25.840539 through 08:48:25.849096; duplicates condensed ...]
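Every line in the run above is the same target-side rejection: the host submits a one-block read (NLB 1, 512-byte blocks) while the request carries only a 1-byte data SGL, so nvmf_bdev_ctrlr_read_cmd fails the command before it reaches the bdev layer. Below is a minimal, self-contained sketch of that kind of length check, paraphrased from what ctrlr_bdev.c:309 appears to verify; it is not the verbatim SPDK source, and the function and parameter names (check_read_len, sgl_len) are illustrative only.

#include <inttypes.h>
#include <stdio.h>

/* Sketch of the transfer-length validation behind the *ERROR* lines
 * above. An NVMe read encodes NLB (Number of Logical Blocks) in the
 * low 16 bits of CDW12, zero-based, so the transfer spans
 * (nlb_field + 1) blocks; the data buffer the host supplied must be
 * at least num_blocks * block_size bytes. */
static int check_read_len(uint32_t cdw12, uint32_t block_size, uint32_t sgl_len)
{
	uint64_t num_blocks = (uint64_t)(cdw12 & 0xFFFFu) + 1; /* NLB is zero-based */

	if (num_blocks * block_size > sgl_len) {
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32 " > SGL length %" PRIu32 "\n",
			num_blocks, block_size, sgl_len);
		return -1; /* complete the command with an error instead of issuing it */
	}
	return 0;
}

int main(void)
{
	/* The case flooding this log: 1 block * 512 bytes > 1-byte SGL. */
	check_read_len(0, 512, 1);
	return 0;
}

Compiled standalone, this prints the message seen above once per call; the target emits it once per rejected read, which is why it floods the console while the ns_hotplug_stress loop keeps I/O running.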
[... identical *ERROR* line repeated, 08:48:25.849138 through 08:48:25.849275 ...]
00:14:08.681 08:48:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903
[... identical *ERROR* line repeated, 08:48:25.849320 through 08:48:25.849648 ...]
00:14:08.681 08:48:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
[... identical *ERROR* line repeated, 08:48:25.849696 through 08:48:25.859344 ...]
00:14:08.683 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical *ERROR* line repeated, 08:48:25.859397 through 08:48:25.869880 ...]
[2024-04-26 08:48:25.869927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.959 [2024-04-26 08:48:25.869980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.959 [2024-04-26 08:48:25.870031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.959 [2024-04-26 08:48:25.870082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.870987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.871819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.872333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.872387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.872435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.872492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.872539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.872583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.872632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.872670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.872708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.872750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.872793] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.872840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.872879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.872922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.872967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.873998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 
[2024-04-26 08:48:25.874047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.874955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.875004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.960 [2024-04-26 08:48:25.875062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.875112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.875159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.875206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.875257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.875308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.875514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.875892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.875945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.875988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876903] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.876952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.877999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 
[2024-04-26 08:48:25.878176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.878807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.879326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.879379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.879428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.879487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.879535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.879581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.879631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.879681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.879738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.879788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.879838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.879884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.879931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.879976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.961 [2024-04-26 08:48:25.880756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.880804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.880851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.880900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.880946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.880991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881077] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.881954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.882005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.882065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.882113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.882164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.882209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 
[2024-04-26 08:48:25.882260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.882312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.882516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.882865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.882917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.882969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.883989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.884963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.885016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.885066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.885112] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.885173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.885225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.885271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.885320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.885370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.885410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.885459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.885502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.885545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.962 [2024-04-26 08:48:25.885586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.885629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.885676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.885719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 
[2024-04-26 08:48:25.886790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.886990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.887955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.888002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.963 [2024-04-26 08:48:25.888048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 
00:14:08.963 [2024-04-26 08:48:25.888099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:14:08.963 [... same *ERROR* line repeated continuously, 08:48:25.888 through 08:48:25.913 ...] 
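(The flood above comes down to one length check in the target's read path: a read of NLB blocks of block_size bytes must fit inside the SGL the host supplied, and a 512-byte read against a 1-byte SGL fails it. Below is a minimal standalone sketch of such a check, not SPDK's actual ctrlr_bdev.c; the struct and function names are illustrative. The status pair it produces matches the suppressed completions just after this point in the log: the NVMe generic status code Data SGL Length Invalid is 0x0f, i.e. sct=0, sc=15.)

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* NVMe generic command status values (per the NVMe spec). */
#define NVME_SCT_GENERIC                0x0
#define NVME_SC_DATA_SGL_LENGTH_INVALID 0x0f   /* decimal 15 -> "sc=15" */

struct read_cmd {
        uint64_t slba;   /* starting LBA */
        uint32_t nlb;    /* number of logical blocks, 0-based on the wire */
};

/* Returns true if the read may proceed; fills sct/sc on failure. */
static bool
check_read_len(const struct read_cmd *cmd, uint32_t block_size,
               uint64_t sgl_len, int *sct, int *sc)
{
        uint64_t num_blocks = (uint64_t)cmd->nlb + 1;  /* 0-based -> count */
        uint64_t need = num_blocks * block_size;

        if (need > sgl_len) {
                fprintf(stderr,
                        "*ERROR*: Read NLB %" PRIu64 " * block size %u > SGL length %" PRIu64 "\n",
                        num_blocks, block_size, sgl_len);
                *sct = NVME_SCT_GENERIC;                 /* sct=0 */
                *sc  = NVME_SC_DATA_SGL_LENGTH_INVALID;  /* sc=15 */
                return false;
        }
        return true;
}

int
main(void)
{
        /* The failing case from the log: one 512-byte block vs. a 1-byte SGL. */
        struct read_cmd cmd = { .slba = 0, .nlb = 0 };
        int sct, sc;

        if (!check_read_len(&cmd, 512, 1, &sct, &sc)) {
                printf("Read completed with error (sct=%d, sc=%d)\n", sct, sc);
        }
        return 0;
}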
00:14:08.968 [... same *ERROR* line repeated continuously, 08:48:25.913 through 08:48:25.914 ...] 
00:14:08.968 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:14:08.969 [... same *ERROR* line repeated continuously through 08:48:25.916 ...] 
00:14:08.969 [2024-04-26 08:48:25.916206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.916976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.917024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.917071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.917122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.917168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.917218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.917265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.917316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.917847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.917900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.917940] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.917985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.918996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.919052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.919099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.919143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 
[2024-04-26 08:48:25.919195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.919245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.919290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.919341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.969 [2024-04-26 08:48:25.919392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.919436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.919488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.919521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.919564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.919610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.919656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.919701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.919745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.919789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.919839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.919883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.919920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.919969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.920807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.921292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.921342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.921387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.921429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.921482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.921524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.921568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.921614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.921667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.921716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.921766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.921816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.921870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.921923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.921970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922074] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.922995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 
[2024-04-26 08:48:25.923292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.923998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.924052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.924102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.970 [2024-04-26 08:48:25.924151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.924207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.924253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.924310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.924797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.924841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.924881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.924928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.924970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.925997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926197] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.926960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.927003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.927043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.927091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.927131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.927175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.927225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.927276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.927324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 
[2024-04-26 08:48:25.927373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.927424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.927483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.927537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.927587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.927636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.927686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.927735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.928948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.929000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.929051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.929101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.929149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.929204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.929255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.929308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.929363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.929411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.929465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.929514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.929567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.929613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.929660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.971 [2024-04-26 08:48:25.929710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.929757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.929802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.929855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.929906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.929953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.929998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930306] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.930977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.931020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.931059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.931108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.931166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.931214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.931740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.931794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.931841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.931890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.931940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 
[2024-04-26 08:48:25.931991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.932968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.972 [2024-04-26 08:48:25.933935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.933978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.934020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.934074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.934113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.934166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.934218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.934263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.934317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.934369] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.934418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.934476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.934526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.934574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.934628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.934679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.935960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 
[2024-04-26 08:48:25.936049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.936993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.937039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.937082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.937131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.937180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.973 [2024-04-26 08:48:25.937220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
[2024-04-26 08:48:25.965467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.965521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.965576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.965628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.965678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.965731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.965784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.965836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.965882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.965929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.965980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.966037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.966087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.966139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.966630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.966678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.966718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.966767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.966810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.966857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.966891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.966933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.966977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.967974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968312] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.968961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.969010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.969070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.969116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.969166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.969217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.969272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.969321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.969369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.969416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.969468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.969517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.969559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 
[2024-04-26 08:48:25.970059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.970112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.970159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.970204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:14:08.979 [2024-04-26 08:48:25.970247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.970292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.970333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.979 [2024-04-26 08:48:25.970382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.970426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.970478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.970529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.970581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.970626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.970676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.970728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.970777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.970830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.970883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.970932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.970975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 
08:48:25.971280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.971988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:14:08.980 [2024-04-26 08:48:25.972619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.972989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.973037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.973082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.973125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.973169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.973657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.973699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.973746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.973792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.973837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.973880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.973926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.973966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.974986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.975035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.975084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.975136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.975186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.975239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.980 [2024-04-26 08:48:25.975288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975472] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.975995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.976037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.976085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.976126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.976183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.976227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.976274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.976319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.976365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.976409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.976463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.976512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.976564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 
[2024-04-26 08:48:25.977138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.977985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.978976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979554] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.981 [2024-04-26 08:48:25.979997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.980050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.980106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.980624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.980678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.980722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.980769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.980809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.980855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.980897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.980938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.980987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 
[2024-04-26 08:48:25.981259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.981964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.982997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.983046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.983093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.983143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.983192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.983242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.983293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.983348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.983404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.983469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.983518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.983567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.984121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.984174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982 [2024-04-26 08:48:25.984223] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.982
[2024-04-26 08:48:25.984327 - 08:48:26.013582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical message repeated for every entry in this interval) 00:14:08.982 - 00:14:08.988 
[2024-04-26 08:48:26.013624] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.013666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.013716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.013761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.013806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.013846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.013887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.013936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.013987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 
[2024-04-26 08:48:26.014894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.014990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.015038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 [2024-04-26 08:48:26.015083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:14:08.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:08.988 08:48:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:08.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:08.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:08.988 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:09.248 08:48:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:14:09.248 08:48:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:09.248 true 00:14:09.248 08:48:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:14:09.248 08:48:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.184 08:48:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.442 08:48:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:14:10.442 08:48:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:10.442 true 00:14:10.443 08:48:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:14:10.443 08:48:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.701 08:48:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.960 08:48:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:14:10.960 08:48:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:10.960 true 00:14:11.218 08:48:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:14:11.218 08:48:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:12.155 08:48:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.413 08:48:29 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:14:12.413 08:48:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:12.671 true 00:14:12.671 08:48:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:14:12.671 08:48:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.671 08:48:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.930 08:48:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:14:12.930 08:48:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:13.188 true 00:14:13.188 08:48:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:14:13.188 08:48:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.124 Initializing NVMe Controllers 00:14:14.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:14.124 Controller IO queue size 128, less than required. 00:14:14.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:14.124 Controller IO queue size 128, less than required. 00:14:14.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:14.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:14.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:14.124 Initialization complete. Launching workers. 
00:14:14.124 ======================================================== 00:14:14.124 Latency(us) 00:14:14.124 Device Information : IOPS MiB/s Average min max 00:14:14.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2145.83 1.05 33989.96 1996.10 1146937.96 00:14:14.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13965.37 6.82 9143.72 2071.00 359754.30 00:14:14.124 ======================================================== 00:14:14.125 Total : 16111.20 7.87 12452.97 1996.10 1146937.96 00:14:14.125 00:14:14.384 08:48:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.384 08:48:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:14:14.384 08:48:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:14.643 true 00:14:14.644 08:48:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1987903 00:14:14.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (1987903) - No such process 00:14:14.644 08:48:31 -- target/ns_hotplug_stress.sh@44 -- # wait 1987903 00:14:14.644 08:48:31 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:14.644 08:48:31 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:14:14.644 08:48:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:14.644 08:48:31 -- nvmf/common.sh@117 -- # sync 00:14:14.644 08:48:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:14.644 08:48:31 -- nvmf/common.sh@120 -- # set +e 00:14:14.644 08:48:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:14.644 08:48:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:14.644 rmmod nvme_tcp 00:14:14.644 rmmod nvme_fabrics 00:14:14.644 rmmod nvme_keyring 00:14:14.644 08:48:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:14.644 08:48:31 -- nvmf/common.sh@124 -- # set -e 00:14:14.644 08:48:31 -- nvmf/common.sh@125 -- # return 0 00:14:14.644 08:48:31 -- nvmf/common.sh@478 -- # '[' -n 1987309 ']' 00:14:14.644 08:48:31 -- nvmf/common.sh@479 -- # killprocess 1987309 00:14:14.644 08:48:31 -- common/autotest_common.sh@936 -- # '[' -z 1987309 ']' 00:14:14.644 08:48:31 -- common/autotest_common.sh@940 -- # kill -0 1987309 00:14:14.644 08:48:31 -- common/autotest_common.sh@941 -- # uname 00:14:14.644 08:48:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:14.644 08:48:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1987309 00:14:14.644 08:48:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:14.644 08:48:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:14.644 08:48:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1987309' 00:14:14.644 killing process with pid 1987309 00:14:14.644 08:48:31 -- common/autotest_common.sh@955 -- # kill 1987309 00:14:14.644 08:48:31 -- common/autotest_common.sh@960 -- # wait 1987309 00:14:14.908 08:48:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:14.908 08:48:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:14.908 08:48:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:14.908 08:48:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:14.908 08:48:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:14.908 08:48:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 
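The block above, from "Initializing NVMe Controllers" through the latency table, is the summary printed once the I/O generator drains: the Total row is the IOPS-weighted mean of the two namespaces, (2145.83 * 33989.96 + 13965.37 * 9143.72) / 16111.20 ≈ 12452.97 us, which matches the printed average to the rounding of the displayed figures. The surrounding @35-@41 records are the hotplug loop itself: while the generator (pid 1987903) keeps reads in flight against NULL1, the script hot-removes namespace 1, re-attaches the Delay0 bdev, and grows NULL1 one step per pass, so the earlier "Read NLB 1 * block size 512 > SGL length 1" flood and the sc=11 read completions appear to be the stress the test intends rather than a failure; the loop ends normally once kill -0 reports the generator gone. A minimal sketch of that loop, reconstructed from the traced rpc.py calls and script line numbers (@35-@41) — PERF_PID, the starting size, and the loop shape are assumptions, only the rpc.py invocations come from the log:

    # Hedged reconstruction of the ns_hotplug_stress loop traced above (@35-@41).
    # PERF_PID and null_size are assumed names; the captured run is already at 1031+.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1030
    while kill -0 $PERF_PID; do                                       # @35: loop until the I/O generator exits
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @36: hot-remove namespace 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @37: re-attach the Delay0 bdev
      null_size=$((null_size + 1))                                    # @40: 1031, 1032, ... as seen above
      $rpc_py bdev_null_resize NULL1 $null_size                       # @41: resize NULL1 under live I/O
    done
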
00:14:14.908 08:48:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.908 08:48:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.445 08:48:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:17.445 00:14:17.445 real 0m43.264s 00:14:17.445 user 2m28.514s 00:14:17.445 sys 0m16.200s 00:14:17.445 08:48:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:17.445 08:48:34 -- common/autotest_common.sh@10 -- # set +x 00:14:17.445 ************************************ 00:14:17.445 END TEST nvmf_ns_hotplug_stress 00:14:17.445 ************************************ 00:14:17.445 08:48:34 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:17.445 08:48:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:17.445 08:48:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:17.445 08:48:34 -- common/autotest_common.sh@10 -- # set +x 00:14:17.445 ************************************ 00:14:17.445 START TEST nvmf_connect_stress 00:14:17.445 ************************************ 00:14:17.445 08:48:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:17.445 * Looking for test storage... 00:14:17.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.445 08:48:34 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.445 08:48:34 -- nvmf/common.sh@7 -- # uname -s 00:14:17.445 08:48:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.445 08:48:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.445 08:48:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.445 08:48:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.445 08:48:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.445 08:48:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.445 08:48:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.445 08:48:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.445 08:48:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.445 08:48:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.445 08:48:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:17.445 08:48:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:17.445 08:48:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.445 08:48:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.445 08:48:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.445 08:48:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.445 08:48:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.445 08:48:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.445 08:48:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.445 08:48:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.445 08:48:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.445 08:48:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.446 08:48:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.446 08:48:34 -- paths/export.sh@5 -- # export PATH 00:14:17.446 08:48:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.446 08:48:34 -- nvmf/common.sh@47 -- # : 0 00:14:17.446 08:48:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:17.446 08:48:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:17.446 08:48:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.446 08:48:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.446 08:48:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.446 08:48:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:17.446 08:48:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:17.446 08:48:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:17.446 08:48:34 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:17.446 08:48:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:17.446 08:48:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.446 08:48:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:17.446 08:48:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:17.446 08:48:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:17.446 08:48:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.446 08:48:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.446 08:48:34 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.446 08:48:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:17.446 08:48:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:17.446 08:48:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:17.446 08:48:34 -- common/autotest_common.sh@10 -- # set +x 00:14:25.585 08:48:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:25.585 08:48:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:25.585 08:48:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:25.585 08:48:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:25.585 08:48:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:25.585 08:48:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:25.585 08:48:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:25.586 08:48:41 -- nvmf/common.sh@295 -- # net_devs=() 00:14:25.586 08:48:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:25.586 08:48:41 -- nvmf/common.sh@296 -- # e810=() 00:14:25.586 08:48:41 -- nvmf/common.sh@296 -- # local -ga e810 00:14:25.586 08:48:41 -- nvmf/common.sh@297 -- # x722=() 00:14:25.586 08:48:41 -- nvmf/common.sh@297 -- # local -ga x722 00:14:25.586 08:48:41 -- nvmf/common.sh@298 -- # mlx=() 00:14:25.586 08:48:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:25.586 08:48:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.586 08:48:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.586 08:48:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.586 08:48:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.586 08:48:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.586 08:48:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.586 08:48:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.586 08:48:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.586 08:48:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.586 08:48:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.586 08:48:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.586 08:48:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:25.586 08:48:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:25.586 08:48:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:25.586 08:48:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.586 08:48:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:25.586 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:25.586 08:48:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.586 08:48:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:25.586 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:25.586 
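The device probe here works in two steps: match PCI vendor:device IDs against the supported lists (0x8086:0x159b, matched above for 0000:af:00.0 and 0000:af:00.1, is an E810 part bound to the ice driver), then resolve each matched PCI function to its kernel netdev through sysfs, which is where the cvl_0_0/cvl_0_1 names below come from. Restated from the nvmf/common.sh trace as a stand-alone snippet — the PCI address is taken from the log, nothing else is assumed:

    # A NIC's netdev name is the directory under its PCI device's net/ node in sysfs.
    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keeping cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
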
08:48:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:25.586 08:48:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.586 08:48:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.586 08:48:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:25.586 08:48:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.586 08:48:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:25.586 Found net devices under 0000:af:00.0: cvl_0_0 00:14:25.586 08:48:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.586 08:48:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.586 08:48:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.586 08:48:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:25.586 08:48:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.586 08:48:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:25.586 Found net devices under 0000:af:00.1: cvl_0_1 00:14:25.586 08:48:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.586 08:48:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:25.586 08:48:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:25.586 08:48:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:25.586 08:48:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:25.586 08:48:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.586 08:48:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.586 08:48:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.586 08:48:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:25.586 08:48:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.586 08:48:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.586 08:48:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:25.586 08:48:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.586 08:48:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.586 08:48:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:25.586 08:48:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:25.586 08:48:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.586 08:48:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.586 08:48:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.586 08:48:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.586 08:48:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:25.586 08:48:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.586 08:48:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.587 08:48:41 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.587 08:48:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:25.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:14:25.587 00:14:25.587 --- 10.0.0.2 ping statistics --- 00:14:25.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.587 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:14:25.587 08:48:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:25.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:14:25.587 00:14:25.587 --- 10.0.0.1 ping statistics --- 00:14:25.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.587 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:14:25.587 08:48:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.587 08:48:41 -- nvmf/common.sh@411 -- # return 0 00:14:25.587 08:48:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:25.587 08:48:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.587 08:48:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:25.587 08:48:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:25.587 08:48:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.587 08:48:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:25.587 08:48:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:25.587 08:48:41 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:25.587 08:48:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:25.587 08:48:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:25.587 08:48:41 -- common/autotest_common.sh@10 -- # set +x 00:14:25.587 08:48:41 -- nvmf/common.sh@470 -- # nvmfpid=1997719 00:14:25.587 08:48:41 -- nvmf/common.sh@471 -- # waitforlisten 1997719 00:14:25.587 08:48:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:25.587 08:48:41 -- common/autotest_common.sh@817 -- # '[' -z 1997719 ']' 00:14:25.587 08:48:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.587 08:48:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:25.587 08:48:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.587 08:48:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:25.587 08:48:41 -- common/autotest_common.sh@10 -- # set +x 00:14:25.587 [2024-04-26 08:48:41.761756] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:14:25.587 [2024-04-26 08:48:41.761805] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.587 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.587 [2024-04-26 08:48:41.836039] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:25.587 [2024-04-26 08:48:41.902426] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:25.587 [2024-04-26 08:48:41.902474] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.587 [2024-04-26 08:48:41.902484] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.587 [2024-04-26 08:48:41.902513] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.587 [2024-04-26 08:48:41.902520] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.587 [2024-04-26 08:48:41.902627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.587 [2024-04-26 08:48:41.902720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.587 [2024-04-26 08:48:41.902721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.587 08:48:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:25.587 08:48:42 -- common/autotest_common.sh@850 -- # return 0 00:14:25.587 08:48:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:25.587 08:48:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:25.587 08:48:42 -- common/autotest_common.sh@10 -- # set +x 00:14:25.587 08:48:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.587 08:48:42 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:25.587 08:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:25.587 08:48:42 -- common/autotest_common.sh@10 -- # set +x 00:14:25.587 [2024-04-26 08:48:42.614667] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.587 08:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:25.587 08:48:42 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:25.587 08:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:25.587 08:48:42 -- common/autotest_common.sh@10 -- # set +x 00:14:25.587 08:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:25.587 08:48:42 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.587 08:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:25.587 08:48:42 -- common/autotest_common.sh@10 -- # set +x 00:14:25.587 [2024-04-26 08:48:42.652627] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.587 08:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:25.587 08:48:42 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:25.587 08:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:25.587 08:48:42 -- common/autotest_common.sh@10 -- # set +x 00:14:25.587 NULL1 00:14:25.587 08:48:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:25.587 08:48:42 -- target/connect_stress.sh@21 -- # PERF_PID=1997817 00:14:25.587 08:48:42 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:25.588 08:48:42 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:25.588 08:48:42 -- target/connect_stress.sh@25 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:25.588 08:48:42 -- target/connect_stress.sh@28 -- # cat 00:14:25.588 08:48:42 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:25.588 08:48:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.588 08:48:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:25.588 08:48:42 -- common/autotest_common.sh@10 -- # set +x 00:14:26.162 08:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:26.162 08:48:43 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:26.162 08:48:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.162 08:48:43 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:14:26.162 08:48:43 -- common/autotest_common.sh@10 -- # set +x 00:14:26.422 08:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:26.422 08:48:43 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:26.422 08:48:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.422 08:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:26.422 08:48:43 -- common/autotest_common.sh@10 -- # set +x 00:14:26.681 08:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:26.681 08:48:43 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:26.681 08:48:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.681 08:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:26.681 08:48:43 -- common/autotest_common.sh@10 -- # set +x 00:14:26.942 08:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:26.942 08:48:44 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:26.942 08:48:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.942 08:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:26.942 08:48:44 -- common/autotest_common.sh@10 -- # set +x 00:14:27.201 08:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:27.201 08:48:44 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:27.201 08:48:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.202 08:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:27.202 08:48:44 -- common/autotest_common.sh@10 -- # set +x 00:14:27.772 08:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:27.772 08:48:44 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:27.772 08:48:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:27.772 08:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:27.772 08:48:44 -- common/autotest_common.sh@10 -- # set +x 00:14:28.032 08:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:28.032 08:48:45 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:28.032 08:48:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.032 08:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:28.032 08:48:45 -- common/autotest_common.sh@10 -- # set +x 00:14:28.291 08:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:28.291 08:48:45 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:28.291 08:48:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.291 08:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:28.291 08:48:45 -- common/autotest_common.sh@10 -- # set +x 00:14:28.551 08:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:28.551 08:48:45 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:28.551 08:48:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.551 08:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:28.551 08:48:45 -- common/autotest_common.sh@10 -- # set +x 00:14:28.811 08:48:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:28.811 08:48:45 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:28.811 08:48:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:28.811 08:48:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:28.811 08:48:46 -- common/autotest_common.sh@10 -- # set +x 00:14:29.382 08:48:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:29.382 08:48:46 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:29.382 08:48:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.382 08:48:46 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:14:29.382 08:48:46 -- common/autotest_common.sh@10 -- # set +x 00:14:29.641 08:48:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:29.641 08:48:46 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:29.641 08:48:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.641 08:48:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:29.641 08:48:46 -- common/autotest_common.sh@10 -- # set +x 00:14:29.902 08:48:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:29.902 08:48:46 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:29.902 08:48:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:29.902 08:48:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:29.902 08:48:46 -- common/autotest_common.sh@10 -- # set +x 00:14:30.162 08:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:30.162 08:48:47 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:30.162 08:48:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.162 08:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:30.162 08:48:47 -- common/autotest_common.sh@10 -- # set +x 00:14:30.421 08:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:30.422 08:48:47 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:30.422 08:48:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.422 08:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:30.422 08:48:47 -- common/autotest_common.sh@10 -- # set +x 00:14:30.992 08:48:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:30.992 08:48:47 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:30.992 08:48:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:30.992 08:48:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:30.992 08:48:47 -- common/autotest_common.sh@10 -- # set +x 00:14:31.252 08:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:31.252 08:48:48 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:31.252 08:48:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.252 08:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:31.252 08:48:48 -- common/autotest_common.sh@10 -- # set +x 00:14:31.512 08:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:31.512 08:48:48 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:31.512 08:48:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.512 08:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:31.512 08:48:48 -- common/autotest_common.sh@10 -- # set +x 00:14:31.774 08:48:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:31.774 08:48:48 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:31.774 08:48:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.774 08:48:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:31.774 08:48:48 -- common/autotest_common.sh@10 -- # set +x 00:14:32.040 08:48:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.040 08:48:49 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:32.040 08:48:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.040 08:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.040 08:48:49 -- common/autotest_common.sh@10 -- # set +x 00:14:32.609 08:48:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.609 08:48:49 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:32.609 08:48:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.609 08:48:49 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.609 08:48:49 -- common/autotest_common.sh@10 -- # set +x 00:14:32.868 08:48:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:32.868 08:48:49 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:32.868 08:48:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.868 08:48:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:32.868 08:48:49 -- common/autotest_common.sh@10 -- # set +x 00:14:33.128 08:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:33.128 08:48:50 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:33.128 08:48:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.128 08:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:33.128 08:48:50 -- common/autotest_common.sh@10 -- # set +x 00:14:33.387 08:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:33.387 08:48:50 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:33.387 08:48:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.387 08:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:33.387 08:48:50 -- common/autotest_common.sh@10 -- # set +x 00:14:33.647 08:48:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:33.647 08:48:50 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:33.647 08:48:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.647 08:48:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:33.647 08:48:50 -- common/autotest_common.sh@10 -- # set +x 00:14:34.214 08:48:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.214 08:48:51 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:34.214 08:48:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.214 08:48:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.214 08:48:51 -- common/autotest_common.sh@10 -- # set +x 00:14:34.474 08:48:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.474 08:48:51 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:34.474 08:48:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.474 08:48:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.474 08:48:51 -- common/autotest_common.sh@10 -- # set +x 00:14:34.732 08:48:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.732 08:48:51 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:34.732 08:48:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.732 08:48:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.732 08:48:51 -- common/autotest_common.sh@10 -- # set +x 00:14:34.992 08:48:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.992 08:48:52 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:34.992 08:48:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.992 08:48:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.992 08:48:52 -- common/autotest_common.sh@10 -- # set +x 00:14:35.251 08:48:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:35.251 08:48:52 -- target/connect_stress.sh@34 -- # kill -0 1997817 00:14:35.251 08:48:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.251 08:48:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:35.251 08:48:52 -- common/autotest_common.sh@10 -- # set +x 00:14:35.820 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:35.820 08:48:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:35.820 08:48:52 -- target/connect_stress.sh@34 -- # kill -0 1997817 
00:14:35.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1997817) - No such process 00:14:35.820 08:48:52 -- target/connect_stress.sh@38 -- # wait 1997817 00:14:35.820 08:48:52 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:35.820 08:48:52 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:35.820 08:48:52 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:35.820 08:48:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:35.820 08:48:52 -- nvmf/common.sh@117 -- # sync 00:14:35.820 08:48:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:35.820 08:48:52 -- nvmf/common.sh@120 -- # set +e 00:14:35.820 08:48:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:35.820 08:48:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:35.820 rmmod nvme_tcp 00:14:35.820 rmmod nvme_fabrics 00:14:35.820 rmmod nvme_keyring 00:14:35.820 08:48:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:35.820 08:48:52 -- nvmf/common.sh@124 -- # set -e 00:14:35.820 08:48:52 -- nvmf/common.sh@125 -- # return 0 00:14:35.820 08:48:52 -- nvmf/common.sh@478 -- # '[' -n 1997719 ']' 00:14:35.820 08:48:52 -- nvmf/common.sh@479 -- # killprocess 1997719 00:14:35.820 08:48:52 -- common/autotest_common.sh@936 -- # '[' -z 1997719 ']' 00:14:35.820 08:48:52 -- common/autotest_common.sh@940 -- # kill -0 1997719 00:14:35.820 08:48:52 -- common/autotest_common.sh@941 -- # uname 00:14:35.820 08:48:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:35.820 08:48:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1997719 00:14:35.820 08:48:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:35.820 08:48:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:35.820 08:48:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1997719' 00:14:35.820 killing process with pid 1997719 00:14:35.820 08:48:52 -- common/autotest_common.sh@955 -- # kill 1997719 00:14:35.820 08:48:52 -- common/autotest_common.sh@960 -- # wait 1997719 00:14:36.079 08:48:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:36.079 08:48:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:36.079 08:48:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:36.079 08:48:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:36.079 08:48:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:36.079 08:48:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.079 08:48:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.079 08:48:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.985 08:48:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:37.985 00:14:37.985 real 0m20.848s 00:14:37.985 user 0m40.360s 00:14:37.985 sys 0m10.533s 00:14:37.985 08:48:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:37.985 08:48:55 -- common/autotest_common.sh@10 -- # set +x 00:14:37.985 ************************************ 00:14:37.985 END TEST nvmf_connect_stress 00:14:37.985 ************************************ 00:14:38.244 08:48:55 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:38.245 08:48:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:38.245 08:48:55 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:14:38.245 08:48:55 -- common/autotest_common.sh@10 -- # set +x 00:14:38.245 ************************************ 00:14:38.245 START TEST nvmf_fused_ordering 00:14:38.245 ************************************ 00:14:38.245 08:48:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:38.504 * Looking for test storage... 00:14:38.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.504 08:48:55 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.504 08:48:55 -- nvmf/common.sh@7 -- # uname -s 00:14:38.504 08:48:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.504 08:48:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.504 08:48:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.504 08:48:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.504 08:48:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.504 08:48:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.504 08:48:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.504 08:48:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.504 08:48:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.504 08:48:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.504 08:48:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:38.504 08:48:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:38.504 08:48:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.504 08:48:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.504 08:48:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.504 08:48:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.504 08:48:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.504 08:48:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.504 08:48:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.504 08:48:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.504 08:48:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.504 08:48:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.504 08:48:55 -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.505 08:48:55 -- paths/export.sh@5 -- # export PATH 00:14:38.505 08:48:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.505 08:48:55 -- nvmf/common.sh@47 -- # : 0 00:14:38.505 08:48:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:38.505 08:48:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:38.505 08:48:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.505 08:48:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.505 08:48:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.505 08:48:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:38.505 08:48:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:38.505 08:48:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:38.505 08:48:55 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:38.505 08:48:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:38.505 08:48:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.505 08:48:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:38.505 08:48:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:38.505 08:48:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:38.505 08:48:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.505 08:48:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.505 08:48:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.505 08:48:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:38.505 08:48:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:38.505 08:48:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:38.505 08:48:55 -- common/autotest_common.sh@10 -- # set +x 00:14:45.075 08:49:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:45.075 08:49:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:45.075 08:49:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:45.075 08:49:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:45.075 08:49:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:45.075 08:49:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:45.075 08:49:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:45.075 08:49:02 -- nvmf/common.sh@295 -- # net_devs=() 00:14:45.075 08:49:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:45.075 08:49:02 -- nvmf/common.sh@296 -- # e810=() 00:14:45.075 08:49:02 -- nvmf/common.sh@296 -- # local -ga e810 00:14:45.075 08:49:02 -- nvmf/common.sh@297 -- # 
x722=() 00:14:45.075 08:49:02 -- nvmf/common.sh@297 -- # local -ga x722 00:14:45.075 08:49:02 -- nvmf/common.sh@298 -- # mlx=() 00:14:45.075 08:49:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:45.075 08:49:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.075 08:49:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.075 08:49:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.075 08:49:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.075 08:49:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.076 08:49:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.076 08:49:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.076 08:49:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.076 08:49:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.076 08:49:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.076 08:49:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.076 08:49:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:45.076 08:49:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:45.076 08:49:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:45.076 08:49:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.076 08:49:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:45.076 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:45.076 08:49:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:45.076 08:49:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:45.076 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:45.076 08:49:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:45.076 08:49:02 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.076 08:49:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.076 08:49:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:45.076 08:49:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.076 08:49:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:45.076 Found net devices under 0000:af:00.0: cvl_0_0 00:14:45.076 08:49:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
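The block of trace above is nvmf/common.sh discovering usable NICs: it builds per-family lists of supported PCI vendor:device IDs (e810, x722, mlx), keeps the e810 family for this tcp/phy run, and resolves each matched PCI function to its kernel net device through /sys/bus/pci/devices/<addr>/net/. A minimal standalone sketch of the same pattern, assuming lspci enumeration in place of the script's pci_bus_cache and omitting its driver-state checks:

    #!/usr/bin/env bash
    # Sketch only: locate Intel E810 functions (device IDs 0x1592, 0x159b)
    # and print the net device each one exposes through sysfs.
    for id in 8086:1592 8086:159b; do
      for pci in $(lspci -Dn -d "$id" | awk '{print $1}'); do
        # A function bound to a network driver exposes its interface under .../net/
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$path" ] && echo "Found net device under $pci: ${path##*/}"
        done
      done
    done

On this node the same walk yields cvl_0_0 under 0000:af:00.0 and cvl_0_1 under 0000:af:00.1, matching the 'Found net devices under ...' lines in the trace.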
00:14:45.076 08:49:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:45.076 08:49:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.076 08:49:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:45.076 08:49:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.076 08:49:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:45.076 Found net devices under 0000:af:00.1: cvl_0_1 00:14:45.076 08:49:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.076 08:49:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:45.076 08:49:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:45.076 08:49:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:45.076 08:49:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:45.076 08:49:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.076 08:49:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.076 08:49:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.076 08:49:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:45.076 08:49:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.076 08:49:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.076 08:49:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:45.076 08:49:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.076 08:49:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.076 08:49:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:45.076 08:49:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:45.076 08:49:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.076 08:49:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.338 08:49:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.338 08:49:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.338 08:49:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:45.338 08:49:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.338 08:49:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.338 08:49:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:45.596 08:49:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:45.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:14:45.596 00:14:45.596 --- 10.0.0.2 ping statistics --- 00:14:45.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.596 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:14:45.596 08:49:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:45.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:14:45.596 00:14:45.596 --- 10.0.0.1 ping statistics --- 00:14:45.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.596 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:14:45.596 08:49:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.596 08:49:02 -- nvmf/common.sh@411 -- # return 0 00:14:45.596 08:49:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:45.596 08:49:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.596 08:49:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:45.596 08:49:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:45.596 08:49:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.596 08:49:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:45.596 08:49:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:45.596 08:49:02 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:45.596 08:49:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:45.596 08:49:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:45.596 08:49:02 -- common/autotest_common.sh@10 -- # set +x 00:14:45.596 08:49:02 -- nvmf/common.sh@470 -- # nvmfpid=2003399 00:14:45.596 08:49:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:45.596 08:49:02 -- nvmf/common.sh@471 -- # waitforlisten 2003399 00:14:45.596 08:49:02 -- common/autotest_common.sh@817 -- # '[' -z 2003399 ']' 00:14:45.596 08:49:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.596 08:49:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:45.596 08:49:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.596 08:49:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:45.596 08:49:02 -- common/autotest_common.sh@10 -- # set +x 00:14:45.596 [2024-04-26 08:49:02.695547] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:14:45.596 [2024-04-26 08:49:02.695596] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.596 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.596 [2024-04-26 08:49:02.769095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.596 [2024-04-26 08:49:02.840112] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.596 [2024-04-26 08:49:02.840150] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.596 [2024-04-26 08:49:02.840160] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.596 [2024-04-26 08:49:02.840168] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.597 [2024-04-26 08:49:02.840176] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
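At this point the two-port topology for the tcp/phy run is fully in place: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target port (10.0.0.2/24), cvl_0_1 stayed in the root namespace as the initiator port (10.0.0.1/24), port 4420 was opened in iptables, and both directions were verified with ping. Condensed from the trace, the setup amounts to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2), so its TCP listener on 10.0.0.2:4420 is reachable only through cvl_0_1 and test traffic crosses the physical E810 link instead of loopback.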
00:14:45.597 [2024-04-26 08:49:02.840204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.535 08:49:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:46.535 08:49:03 -- common/autotest_common.sh@850 -- # return 0 00:14:46.535 08:49:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:46.535 08:49:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:46.535 08:49:03 -- common/autotest_common.sh@10 -- # set +x 00:14:46.535 08:49:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.535 08:49:03 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:46.535 08:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.535 08:49:03 -- common/autotest_common.sh@10 -- # set +x 00:14:46.535 [2024-04-26 08:49:03.543099] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.535 08:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.535 08:49:03 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:46.535 08:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.535 08:49:03 -- common/autotest_common.sh@10 -- # set +x 00:14:46.535 08:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.535 08:49:03 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.535 08:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.535 08:49:03 -- common/autotest_common.sh@10 -- # set +x 00:14:46.535 [2024-04-26 08:49:03.559272] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.535 08:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.535 08:49:03 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:46.535 08:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.535 08:49:03 -- common/autotest_common.sh@10 -- # set +x 00:14:46.535 NULL1 00:14:46.535 08:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.535 08:49:03 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:46.535 08:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.535 08:49:03 -- common/autotest_common.sh@10 -- # set +x 00:14:46.535 08:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.535 08:49:03 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:46.535 08:49:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:46.535 08:49:03 -- common/autotest_common.sh@10 -- # set +x 00:14:46.535 08:49:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:46.535 08:49:03 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:46.535 [2024-04-26 08:49:03.616367] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
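The rpc_cmd sequence traced above is the entire target-side setup for this test: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, attach a TCP listener at 10.0.0.2:4420, create a 1000 MB null bdev with 512-byte blocks, and expose it as namespace 1 (hence the 'Namespace ID: 1 size: 1GB' the initiator reports below). rpc_cmd is the test framework's wrapper around scripts/rpc.py, so the equivalent direct invocations, assuming the default /var/tmp/spdk.sock RPC socket, would be:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512      # name, size in MB, block size
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary then connects as an initiator using -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' and exercises fused command ordering (NVMe's fused pair is compare + write), logging one fused_ordering(N) line per iteration.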
00:14:46.535 [2024-04-26 08:49:03.616404] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003567 ] 00:14:46.535 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.477 Attached to nqn.2016-06.io.spdk:cnode1 00:14:47.477 Namespace ID: 1 size: 1GB 00:14:47.477 fused_ordering(0) 00:14:47.477 fused_ordering(1) 00:14:47.477 fused_ordering(2) 00:14:47.477 fused_ordering(3) 00:14:47.477 fused_ordering(4) 00:14:47.477 fused_ordering(5) 00:14:47.477 fused_ordering(6) 00:14:47.477 fused_ordering(7) 00:14:47.477 fused_ordering(8) 00:14:47.477 fused_ordering(9) 00:14:47.477 fused_ordering(10) 00:14:47.477 fused_ordering(11) 00:14:47.477 fused_ordering(12) 00:14:47.477 fused_ordering(13) 00:14:47.477 fused_ordering(14) 00:14:47.478 fused_ordering(15) 00:14:47.478 fused_ordering(16) 00:14:47.478 fused_ordering(17) 00:14:47.478 fused_ordering(18) 00:14:47.478 fused_ordering(19) 00:14:47.478 fused_ordering(20) 00:14:47.478 fused_ordering(21) 00:14:47.478 fused_ordering(22) 00:14:47.478 fused_ordering(23) 00:14:47.478 fused_ordering(24) 00:14:47.478 fused_ordering(25) 00:14:47.478 fused_ordering(26) 00:14:47.478 fused_ordering(27) 00:14:47.478 fused_ordering(28) 00:14:47.478 fused_ordering(29) 00:14:47.478 fused_ordering(30) 00:14:47.478 fused_ordering(31) 00:14:47.478 fused_ordering(32) 00:14:47.478 fused_ordering(33) 00:14:47.478 fused_ordering(34) 00:14:47.478 fused_ordering(35) 00:14:47.478 fused_ordering(36) 00:14:47.478 fused_ordering(37) 00:14:47.478 fused_ordering(38) 00:14:47.478 fused_ordering(39) 00:14:47.478 fused_ordering(40) 00:14:47.478 fused_ordering(41) 00:14:47.478 fused_ordering(42) 00:14:47.478 fused_ordering(43) 00:14:47.478 fused_ordering(44) 00:14:47.478 fused_ordering(45) 00:14:47.478 fused_ordering(46) 00:14:47.478 fused_ordering(47) 00:14:47.478 fused_ordering(48) 00:14:47.478 fused_ordering(49) 00:14:47.478 fused_ordering(50) 00:14:47.478 fused_ordering(51) 00:14:47.478 fused_ordering(52) 00:14:47.478 fused_ordering(53) 00:14:47.478 fused_ordering(54) 00:14:47.478 fused_ordering(55) 00:14:47.478 fused_ordering(56) 00:14:47.478 fused_ordering(57) 00:14:47.478 fused_ordering(58) 00:14:47.478 fused_ordering(59) 00:14:47.478 fused_ordering(60) 00:14:47.478 fused_ordering(61) 00:14:47.478 fused_ordering(62) 00:14:47.478 fused_ordering(63) 00:14:47.478 fused_ordering(64) 00:14:47.478 fused_ordering(65) 00:14:47.478 fused_ordering(66) 00:14:47.478 fused_ordering(67) 00:14:47.478 fused_ordering(68) 00:14:47.478 fused_ordering(69) 00:14:47.478 fused_ordering(70) 00:14:47.478 fused_ordering(71) 00:14:47.478 fused_ordering(72) 00:14:47.478 fused_ordering(73) 00:14:47.478 fused_ordering(74) 00:14:47.478 fused_ordering(75) 00:14:47.478 fused_ordering(76) 00:14:47.478 fused_ordering(77) 00:14:47.478 fused_ordering(78) 00:14:47.478 fused_ordering(79) 00:14:47.478 fused_ordering(80) 00:14:47.478 fused_ordering(81) 00:14:47.478 fused_ordering(82) 00:14:47.478 fused_ordering(83) 00:14:47.478 fused_ordering(84) 00:14:47.478 fused_ordering(85) 00:14:47.478 fused_ordering(86) 00:14:47.478 fused_ordering(87) 00:14:47.478 fused_ordering(88) 00:14:47.478 fused_ordering(89) 00:14:47.478 fused_ordering(90) 00:14:47.478 fused_ordering(91) 00:14:47.478 fused_ordering(92) 00:14:47.478 fused_ordering(93) 00:14:47.478 fused_ordering(94) 00:14:47.478 fused_ordering(95) 00:14:47.478 fused_ordering(96) 00:14:47.478 
fused_ordering(97)
[... fused_ordering(98) through fused_ordering(956) elided: 859 consecutive entries reported in strict order with no gaps, timestamps advancing from 00:14:47.478 to 00:14:50.870 ...]
00:14:50.870
fused_ordering(957) 00:14:50.870 fused_ordering(958) 00:14:50.870 fused_ordering(959) 00:14:50.870 fused_ordering(960) 00:14:50.870 fused_ordering(961) 00:14:50.870 fused_ordering(962) 00:14:50.870 fused_ordering(963) 00:14:50.870 fused_ordering(964) 00:14:50.870 fused_ordering(965) 00:14:50.870 fused_ordering(966) 00:14:50.870 fused_ordering(967) 00:14:50.870 fused_ordering(968) 00:14:50.870 fused_ordering(969) 00:14:50.870 fused_ordering(970) 00:14:50.870 fused_ordering(971) 00:14:50.870 fused_ordering(972) 00:14:50.870 fused_ordering(973) 00:14:50.870 fused_ordering(974) 00:14:50.870 fused_ordering(975) 00:14:50.870 fused_ordering(976) 00:14:50.870 fused_ordering(977) 00:14:50.870 fused_ordering(978) 00:14:50.870 fused_ordering(979) 00:14:50.870 fused_ordering(980) 00:14:50.870 fused_ordering(981) 00:14:50.870 fused_ordering(982) 00:14:50.870 fused_ordering(983) 00:14:50.870 fused_ordering(984) 00:14:50.870 fused_ordering(985) 00:14:50.870 fused_ordering(986) 00:14:50.870 fused_ordering(987) 00:14:50.870 fused_ordering(988) 00:14:50.870 fused_ordering(989) 00:14:50.870 fused_ordering(990) 00:14:50.870 fused_ordering(991) 00:14:50.870 fused_ordering(992) 00:14:50.870 fused_ordering(993) 00:14:50.870 fused_ordering(994) 00:14:50.870 fused_ordering(995) 00:14:50.870 fused_ordering(996) 00:14:50.870 fused_ordering(997) 00:14:50.870 fused_ordering(998) 00:14:50.870 fused_ordering(999) 00:14:50.870 fused_ordering(1000) 00:14:50.870 fused_ordering(1001) 00:14:50.870 fused_ordering(1002) 00:14:50.870 fused_ordering(1003) 00:14:50.870 fused_ordering(1004) 00:14:50.870 fused_ordering(1005) 00:14:50.870 fused_ordering(1006) 00:14:50.870 fused_ordering(1007) 00:14:50.870 fused_ordering(1008) 00:14:50.870 fused_ordering(1009) 00:14:50.870 fused_ordering(1010) 00:14:50.870 fused_ordering(1011) 00:14:50.870 fused_ordering(1012) 00:14:50.870 fused_ordering(1013) 00:14:50.870 fused_ordering(1014) 00:14:50.870 fused_ordering(1015) 00:14:50.870 fused_ordering(1016) 00:14:50.870 fused_ordering(1017) 00:14:50.870 fused_ordering(1018) 00:14:50.870 fused_ordering(1019) 00:14:50.870 fused_ordering(1020) 00:14:50.870 fused_ordering(1021) 00:14:50.870 fused_ordering(1022) 00:14:50.870 fused_ordering(1023) 00:14:50.870 08:49:07 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:50.870 08:49:07 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:50.870 08:49:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:50.870 08:49:07 -- nvmf/common.sh@117 -- # sync 00:14:50.870 08:49:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:50.870 08:49:07 -- nvmf/common.sh@120 -- # set +e 00:14:50.870 08:49:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:50.870 08:49:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:50.870 rmmod nvme_tcp 00:14:50.870 rmmod nvme_fabrics 00:14:50.870 rmmod nvme_keyring 00:14:50.870 08:49:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:50.870 08:49:08 -- nvmf/common.sh@124 -- # set -e 00:14:50.870 08:49:08 -- nvmf/common.sh@125 -- # return 0 00:14:50.870 08:49:08 -- nvmf/common.sh@478 -- # '[' -n 2003399 ']' 00:14:50.870 08:49:08 -- nvmf/common.sh@479 -- # killprocess 2003399 00:14:50.870 08:49:08 -- common/autotest_common.sh@936 -- # '[' -z 2003399 ']' 00:14:50.870 08:49:08 -- common/autotest_common.sh@940 -- # kill -0 2003399 00:14:50.870 08:49:08 -- common/autotest_common.sh@941 -- # uname 00:14:50.870 08:49:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:50.870 08:49:08 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 2003399 00:14:50.870 08:49:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:50.870 08:49:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:50.870 08:49:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2003399' 00:14:50.870 killing process with pid 2003399 00:14:50.870 08:49:08 -- common/autotest_common.sh@955 -- # kill 2003399 00:14:50.870 08:49:08 -- common/autotest_common.sh@960 -- # wait 2003399 00:14:51.128 08:49:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:51.128 08:49:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:51.128 08:49:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:51.128 08:49:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:51.128 08:49:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:51.128 08:49:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.128 08:49:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.128 08:49:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.667 08:49:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:53.667 00:14:53.667 real 0m14.917s 00:14:53.667 user 0m9.016s 00:14:53.667 sys 0m8.684s 00:14:53.667 08:49:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:53.667 08:49:10 -- common/autotest_common.sh@10 -- # set +x 00:14:53.667 ************************************ 00:14:53.667 END TEST nvmf_fused_ordering 00:14:53.667 ************************************ 00:14:53.667 08:49:10 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:53.667 08:49:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:53.667 08:49:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:53.667 08:49:10 -- common/autotest_common.sh@10 -- # set +x 00:14:53.667 ************************************ 00:14:53.667 START TEST nvmf_delete_subsystem 00:14:53.667 ************************************ 00:14:53.667 08:49:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:53.667 * Looking for test storage... 
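Every suite in this phase is dispatched through the same run_test helper from autotest_common.sh, which prints the asterisk-framed START TEST / END TEST banners seen throughout this log, times the suite (the real/user/sys lines above), and propagates its exit status; nvmf/nvmf.sh calls it once per target script with --transport=tcp. A minimal re-creation of the pattern, for illustration only (the real helper also manages xtrace state and argument checks such as the '[' 3 -le 1 ']' test visible in the trace):

    # Illustrative sketch, not the actual autotest_common.sh implementation.
    run_test() {
      local name=$1 rc=0; shift
      printf '%s\n' '************************************' \
                    "START TEST $name" \
                    '************************************'
      time "$@" || rc=$?    # e.g. .../target/delete_subsystem.sh --transport=tcp
      printf '%s\n' '************************************' \
                    "END TEST $name" \
                    '************************************'
      return $rc
    }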
00:14:53.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:53.667 08:49:10 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.667 08:49:10 -- nvmf/common.sh@7 -- # uname -s 00:14:53.667 08:49:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.667 08:49:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.667 08:49:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.667 08:49:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.667 08:49:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.667 08:49:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.667 08:49:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.667 08:49:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.667 08:49:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.667 08:49:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.667 08:49:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:53.667 08:49:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:53.667 08:49:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.667 08:49:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.667 08:49:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.667 08:49:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.667 08:49:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:53.667 08:49:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.667 08:49:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.667 08:49:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.667 08:49:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.667 08:49:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.667 08:49:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.667 08:49:10 -- paths/export.sh@5 -- # export PATH 00:14:53.667 08:49:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.667 08:49:10 -- nvmf/common.sh@47 -- # : 0 00:14:53.667 08:49:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:53.667 08:49:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:53.667 08:49:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.667 08:49:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.667 08:49:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.667 08:49:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:53.667 08:49:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:53.667 08:49:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:53.667 08:49:10 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:53.667 08:49:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:53.667 08:49:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.667 08:49:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:53.667 08:49:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:53.667 08:49:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:53.667 08:49:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.667 08:49:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.667 08:49:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.667 08:49:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:53.667 08:49:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:53.667 08:49:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:53.667 08:49:10 -- common/autotest_common.sh@10 -- # set +x 00:15:00.243 08:49:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:00.243 08:49:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:00.243 08:49:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:00.243 08:49:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:00.243 08:49:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:00.243 08:49:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:00.243 08:49:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:00.243 08:49:17 -- nvmf/common.sh@295 -- # net_devs=() 00:15:00.243 08:49:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:00.243 08:49:17 -- nvmf/common.sh@296 -- # e810=() 00:15:00.243 08:49:17 -- nvmf/common.sh@296 -- # local -ga e810 00:15:00.243 08:49:17 -- nvmf/common.sh@297 -- # x722=() 
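The e810/x722/mlx arrays being populated above key NICs by PCI vendor:device ID. A minimal standalone sketch of that classification (the IDs are copied from the trace; the lspci invocation and its parsing are illustrative assumptions, not SPDK's actual helper):

  intel=0x8086; mellanox=0x15b3
  e810=(); x722=(); mlx=()
  while read -r addr vendor device; do
    case "$vendor:$device" in
      "$intel:0x1592"|"$intel:0x159b") e810+=("$addr") ;;  # Intel E810 family (ice)
      "$intel:0x37d2")                 x722+=("$addr") ;;  # Intel X722
      "$mellanox:"*)                   mlx+=("$addr")  ;;  # Mellanox ConnectX family
    esac
  done < <(lspci -Dnmm -d ::0200 | awk '{gsub(/"/,""); print $1, "0x"$3, "0x"$4}')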
00:15:00.243 08:49:17 -- nvmf/common.sh@297 -- # local -ga x722 00:15:00.243 08:49:17 -- nvmf/common.sh@298 -- # mlx=() 00:15:00.243 08:49:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:00.243 08:49:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:00.243 08:49:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:00.243 08:49:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:00.243 08:49:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:00.243 08:49:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:00.243 08:49:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:00.243 08:49:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:00.243 08:49:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:00.243 08:49:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:00.243 08:49:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:00.243 08:49:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:00.243 08:49:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:00.243 08:49:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:00.243 08:49:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:00.243 08:49:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:00.243 08:49:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:00.243 08:49:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:00.243 08:49:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:00.243 08:49:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:00.243 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:00.243 08:49:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:00.243 08:49:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:00.243 08:49:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.243 08:49:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.243 08:49:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:00.243 08:49:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:00.243 08:49:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:00.243 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:00.243 08:49:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:00.243 08:49:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:00.243 08:49:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.243 08:49:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.243 08:49:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:00.243 08:49:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:00.243 08:49:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:00.243 08:49:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:00.243 08:49:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:00.243 08:49:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.243 08:49:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:00.243 08:49:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.243 08:49:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:00.243 Found net devices under 0000:af:00.0: cvl_0_0 00:15:00.243 08:49:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
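The "Found net devices under ..." lines come straight from a sysfs glob, visible in the trace at nvmf/common.sh@383-389. Condensed into a runnable fragment, with the values logged for the first port:

  pci=0000:af:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # one entry per netdev
  pci_net_devs=("${pci_net_devs[@]##*/}")              # keep only the names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0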
00:15:00.243 08:49:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:00.243 08:49:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.243 08:49:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:00.243 08:49:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.243 08:49:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:00.243 Found net devices under 0000:af:00.1: cvl_0_1 00:15:00.243 08:49:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.243 08:49:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:00.244 08:49:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:00.244 08:49:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:00.244 08:49:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:00.244 08:49:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:00.244 08:49:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.244 08:49:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.244 08:49:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:00.244 08:49:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:00.244 08:49:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:00.244 08:49:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:00.244 08:49:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:00.244 08:49:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:00.244 08:49:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.244 08:49:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:00.244 08:49:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:00.244 08:49:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:00.244 08:49:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:00.244 08:49:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:00.244 08:49:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:00.244 08:49:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:00.244 08:49:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:00.244 08:49:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:00.244 08:49:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:00.244 08:49:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:00.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:15:00.244 00:15:00.244 --- 10.0.0.2 ping statistics --- 00:15:00.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.244 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:15:00.244 08:49:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:00.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:00.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:15:00.244 00:15:00.244 --- 10.0.0.1 ping statistics --- 00:15:00.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.244 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:15:00.244 08:49:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.244 08:49:17 -- nvmf/common.sh@411 -- # return 0 00:15:00.244 08:49:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:00.244 08:49:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.244 08:49:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:00.244 08:49:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:00.244 08:49:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.244 08:49:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:00.244 08:49:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:00.244 08:49:17 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:00.244 08:49:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:00.244 08:49:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:00.244 08:49:17 -- common/autotest_common.sh@10 -- # set +x 00:15:00.504 08:49:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:00.504 08:49:17 -- nvmf/common.sh@470 -- # nvmfpid=2008114 00:15:00.504 08:49:17 -- nvmf/common.sh@471 -- # waitforlisten 2008114 00:15:00.504 08:49:17 -- common/autotest_common.sh@817 -- # '[' -z 2008114 ']' 00:15:00.504 08:49:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.504 08:49:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:00.504 08:49:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.504 08:49:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:00.504 08:49:17 -- common/autotest_common.sh@10 -- # set +x 00:15:00.504 [2024-04-26 08:49:17.527328] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:15:00.504 [2024-04-26 08:49:17.527374] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.504 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.504 [2024-04-26 08:49:17.601084] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:00.504 [2024-04-26 08:49:17.683959] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.504 [2024-04-26 08:49:17.683994] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.504 [2024-04-26 08:49:17.684004] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.504 [2024-04-26 08:49:17.684012] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.504 [2024-04-26 08:49:17.684019] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
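Stripped of the xtrace noise, the nvmf_tcp_init sequence traced above builds a two-port loopback testbed: the first E810 port becomes the target inside a network namespace, the second stays in the root namespace as the initiator, and TCP port 4420 is opened between them. The commands, condensed verbatim from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # NVMe/TCP port
  ping -c 1 10.0.0.2                                           # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target -> initiator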
00:15:00.504 [2024-04-26 08:49:17.684059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.504 [2024-04-26 08:49:17.684061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.442 08:49:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:01.442 08:49:18 -- common/autotest_common.sh@850 -- # return 0 00:15:01.442 08:49:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:01.442 08:49:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:01.442 08:49:18 -- common/autotest_common.sh@10 -- # set +x 00:15:01.442 08:49:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.442 08:49:18 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:01.442 08:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:01.442 08:49:18 -- common/autotest_common.sh@10 -- # set +x 00:15:01.442 [2024-04-26 08:49:18.392968] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.442 08:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:01.442 08:49:18 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:01.442 08:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:01.442 08:49:18 -- common/autotest_common.sh@10 -- # set +x 00:15:01.442 08:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:01.442 08:49:18 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.442 08:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:01.442 08:49:18 -- common/autotest_common.sh@10 -- # set +x 00:15:01.442 [2024-04-26 08:49:18.409117] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.442 08:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:01.442 08:49:18 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:01.442 08:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:01.442 08:49:18 -- common/autotest_common.sh@10 -- # set +x 00:15:01.442 NULL1 00:15:01.442 08:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:01.442 08:49:18 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:01.442 08:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:01.442 08:49:18 -- common/autotest_common.sh@10 -- # set +x 00:15:01.442 Delay0 00:15:01.442 08:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:01.442 08:49:18 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:01.442 08:49:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:01.442 08:49:18 -- common/autotest_common.sh@10 -- # set +x 00:15:01.442 08:49:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:01.442 08:49:18 -- target/delete_subsystem.sh@28 -- # perf_pid=2008202 00:15:01.442 08:49:18 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:01.442 08:49:18 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:01.442 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.442 [2024-04-26 08:49:18.493720] 
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:03.348 08:49:20 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.348 08:49:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:03.348 08:49:20 -- common/autotest_common.sh@10 -- # set +x 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 starting I/O failed: -6 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 starting I/O failed: -6 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 starting I/O failed: -6 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 starting I/O failed: -6 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 starting I/O failed: -6 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 starting I/O failed: -6 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 starting I/O failed: -6 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 starting I/O failed: -6 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 starting I/O failed: -6 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 starting I/O failed: -6 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 starting I/O failed: -6 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 starting I/O failed: -6 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 [2024-04-26 08:49:20.584197] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1221eb0 is same with the state(5) to be set 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Write completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.348 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 starting I/O failed: -6 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error 
(sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 starting I/O failed: -6 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 starting I/O failed: -6 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 starting I/O failed: -6 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 starting I/O failed: -6 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 starting I/O failed: -6 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 starting I/O failed: -6 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 starting I/O failed: -6 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 starting I/O failed: -6 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 [2024-04-26 08:49:20.585668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe91c000c00 is same with the state(5) to be set 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with 
error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Read completed with error (sct=0, sc=8) 00:15:03.349 Write completed with error (sct=0, sc=8) 00:15:04.727 [2024-04-26 08:49:21.550780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1222500 is same with the state(5) to be set 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 [2024-04-26 08:49:21.587729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1221d20 is same with the state(5) to be set 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, 
sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 [2024-04-26 08:49:21.588287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe91c00bf90 is same with the state(5) to be set 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 [2024-04-26 08:49:21.588458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe91c00c690 is same with the state(5) to be set 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 
00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Write completed with error (sct=0, sc=8) 00:15:04.727 Read completed with error (sct=0, sc=8) 00:15:04.727 [2024-04-26 08:49:21.588600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121fa60 is same with the state(5) to be set 00:15:04.727 [2024-04-26 08:49:21.589428] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1222500 (9): Bad file descriptor 00:15:04.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:04.727 08:49:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:04.727 08:49:21 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:04.727 08:49:21 -- target/delete_subsystem.sh@35 -- # kill -0 2008202 00:15:04.727 08:49:21 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:04.727 Initializing NVMe Controllers 00:15:04.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:04.727 Controller IO queue size 128, less than required. 00:15:04.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:04.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:04.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:04.728 Initialization complete. Launching workers. 00:15:04.728 ======================================================== 00:15:04.728 Latency(us) 00:15:04.728 Device Information : IOPS MiB/s Average min max 00:15:04.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.77 0.09 885989.93 352.87 1012565.76 00:15:04.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.44 0.07 1013111.05 242.82 2002341.07 00:15:04.728 ======================================================== 00:15:04.728 Total : 325.21 0.16 944795.58 242.82 2002341.07 00:15:04.728 00:15:04.986 08:49:22 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:04.986 08:49:22 -- target/delete_subsystem.sh@35 -- # kill -0 2008202 00:15:04.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2008202) - No such process 00:15:04.986 08:49:22 -- target/delete_subsystem.sh@45 -- # NOT wait 2008202 00:15:04.986 08:49:22 -- common/autotest_common.sh@638 -- # local es=0 00:15:04.986 08:49:22 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 2008202 00:15:04.986 08:49:22 -- common/autotest_common.sh@626 -- # local arg=wait 00:15:04.986 08:49:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:04.986 08:49:22 -- common/autotest_common.sh@630 -- # type -t wait 00:15:04.986 08:49:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:04.986 08:49:22 -- common/autotest_common.sh@641 -- # wait 2008202 00:15:04.986 08:49:22 -- common/autotest_common.sh@641 -- # es=1 00:15:04.986 08:49:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:04.986 08:49:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:04.986 08:49:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:04.986 08:49:22 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:04.986 08:49:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:04.986 08:49:22 -- common/autotest_common.sh@10 -- # set +x 00:15:04.986 08:49:22 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:04.986 08:49:22 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.986 08:49:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:04.986 08:49:22 -- common/autotest_common.sh@10 -- # set +x 00:15:04.986 [2024-04-26 08:49:22.115559] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.987 08:49:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:04.987 08:49:22 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.987 08:49:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:04.987 08:49:22 -- common/autotest_common.sh@10 -- # set +x 00:15:04.987 08:49:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:04.987 08:49:22 -- target/delete_subsystem.sh@54 -- # perf_pid=2008935 00:15:04.987 08:49:22 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:04.987 08:49:22 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:04.987 08:49:22 -- target/delete_subsystem.sh@57 -- # kill -0 2008935 00:15:04.987 08:49:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:04.987 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.987 [2024-04-26 08:49:22.185421] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
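The create, load, delete cycle this test runs (twice) is buried in the rpc_cmd trace above; condensed, with rpc.py standing in for the rpc_cmd wrapper and comments added:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512          # name, size (MB), block size
  rpc.py bdev_delay_create -b NULL1 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s latency per op keeps I/O queued
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                 -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1     # yank it mid-I/O

The walls of "completed with error (sct=0, sc=8)" above are that deletion landing on queued I/O; the (( delay++ )) / kill -0 / sleep 0.5 loop then polls until perf notices and exits.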
00:15:05.554 08:49:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:05.554 08:49:22 -- target/delete_subsystem.sh@57 -- # kill -0 2008935 00:15:05.554 08:49:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:06.121 08:49:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:06.121 08:49:23 -- target/delete_subsystem.sh@57 -- # kill -0 2008935 00:15:06.121 08:49:23 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:06.689 08:49:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:06.689 08:49:23 -- target/delete_subsystem.sh@57 -- # kill -0 2008935 00:15:06.689 08:49:23 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:06.948 08:49:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:06.948 08:49:24 -- target/delete_subsystem.sh@57 -- # kill -0 2008935 00:15:06.948 08:49:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:07.516 08:49:24 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:07.516 08:49:24 -- target/delete_subsystem.sh@57 -- # kill -0 2008935 00:15:07.516 08:49:24 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:08.084 08:49:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:08.084 08:49:25 -- target/delete_subsystem.sh@57 -- # kill -0 2008935 00:15:08.084 08:49:25 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:08.084 [2024-04-26 08:49:25.322266] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f58f80 is same with the state(5) to be set 00:15:08.084 [2024-04-26 08:49:25.322292] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f58f80 is same with the state(5) to be set 00:15:08.084 [2024-04-26 08:49:25.322302] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f58f80 is same with the state(5) to be set 00:15:08.084 [2024-04-26 08:49:25.322311] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f58f80 is same with the state(5) to be set 00:15:08.084 [2024-04-26 08:49:25.322319] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f58f80 is same with the state(5) to be set 00:15:08.084 [2024-04-26 08:49:25.322328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f58f80 is same with the state(5) to be set 00:15:08.084 Initializing NVMe Controllers 00:15:08.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:08.084 Controller IO queue size 128, less than required. 00:15:08.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:08.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:08.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:08.084 Initialization complete. Launching workers. 
00:15:08.084 ======================================================== 00:15:08.084 Latency(us) 00:15:08.084 Device Information : IOPS MiB/s Average min max 00:15:08.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003686.23 1000364.23 1010286.43 00:15:08.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005398.80 1000534.02 1013110.18 00:15:08.084 ======================================================== 00:15:08.084 Total : 256.00 0.12 1004542.52 1000364.23 1013110.18 00:15:08.084 00:15:08.653 08:49:25 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:08.653 08:49:25 -- target/delete_subsystem.sh@57 -- # kill -0 2008935 00:15:08.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2008935) - No such process 00:15:08.653 08:49:25 -- target/delete_subsystem.sh@67 -- # wait 2008935 00:15:08.653 08:49:25 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:08.653 08:49:25 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:08.653 08:49:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:08.653 08:49:25 -- nvmf/common.sh@117 -- # sync 00:15:08.653 08:49:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:08.653 08:49:25 -- nvmf/common.sh@120 -- # set +e 00:15:08.653 08:49:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:08.653 08:49:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:08.653 rmmod nvme_tcp 00:15:08.653 rmmod nvme_fabrics 00:15:08.653 rmmod nvme_keyring 00:15:08.653 08:49:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:08.653 08:49:25 -- nvmf/common.sh@124 -- # set -e 00:15:08.653 08:49:25 -- nvmf/common.sh@125 -- # return 0 00:15:08.653 08:49:25 -- nvmf/common.sh@478 -- # '[' -n 2008114 ']' 00:15:08.653 08:49:25 -- nvmf/common.sh@479 -- # killprocess 2008114 00:15:08.653 08:49:25 -- common/autotest_common.sh@936 -- # '[' -z 2008114 ']' 00:15:08.653 08:49:25 -- common/autotest_common.sh@940 -- # kill -0 2008114 00:15:08.653 08:49:25 -- common/autotest_common.sh@941 -- # uname 00:15:08.653 08:49:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:08.653 08:49:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2008114 00:15:08.653 08:49:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:08.653 08:49:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:08.653 08:49:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2008114' 00:15:08.653 killing process with pid 2008114 00:15:08.653 08:49:25 -- common/autotest_common.sh@955 -- # kill 2008114 00:15:08.653 08:49:25 -- common/autotest_common.sh@960 -- # wait 2008114 00:15:08.912 08:49:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:08.912 08:49:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:08.912 08:49:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:08.912 08:49:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:08.912 08:49:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:08.912 08:49:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.912 08:49:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.912 08:49:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.819 08:49:28 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:11.078 00:15:11.078 real 0m17.515s 00:15:11.078 user 0m29.594s 00:15:11.078 sys 0m7.050s 
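Teardown is the same at the end of both passes, per the nvmftestfini trace: unload the initiator-side kernel modules (retrying, since references linger while connections drain), kill the target, and flush the test interface. A sketch reconstructed from the trace; the break-on-success is inferred, while the loop bound and commands are as logged:

  for i in {1..20}; do
    modprobe -v -r nvme-tcp && break    # also drags out nvme_fabrics/nvme_keyring
  done
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"    # killprocess, pid recorded by nvmfappstart
  ip -4 addr flush cvl_0_1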
00:15:11.078 08:49:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:11.078 08:49:28 -- common/autotest_common.sh@10 -- # set +x 00:15:11.078 ************************************ 00:15:11.078 END TEST nvmf_delete_subsystem 00:15:11.078 ************************************ 00:15:11.078 08:49:28 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:11.078 08:49:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:11.078 08:49:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:11.078 08:49:28 -- common/autotest_common.sh@10 -- # set +x 00:15:11.078 ************************************ 00:15:11.078 START TEST nvmf_ns_masking 00:15:11.078 ************************************ 00:15:11.078 08:49:28 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:11.337 * Looking for test storage... 00:15:11.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:11.337 08:49:28 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:11.337 08:49:28 -- nvmf/common.sh@7 -- # uname -s 00:15:11.337 08:49:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.337 08:49:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.337 08:49:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.337 08:49:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.337 08:49:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.337 08:49:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.337 08:49:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.338 08:49:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.338 08:49:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.338 08:49:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.338 08:49:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:11.338 08:49:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:11.338 08:49:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.338 08:49:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.338 08:49:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:11.338 08:49:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.338 08:49:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:11.338 08:49:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.338 08:49:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.338 08:49:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.338 08:49:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.338 08:49:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.338 08:49:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.338 08:49:28 -- paths/export.sh@5 -- # export PATH 00:15:11.338 08:49:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.338 08:49:28 -- nvmf/common.sh@47 -- # : 0 00:15:11.338 08:49:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:11.338 08:49:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:11.338 08:49:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.338 08:49:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.338 08:49:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.338 08:49:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:11.338 08:49:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:11.338 08:49:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:11.338 08:49:28 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.338 08:49:28 -- target/ns_masking.sh@11 -- # loops=5 00:15:11.338 08:49:28 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:11.338 08:49:28 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:15:11.338 08:49:28 -- target/ns_masking.sh@15 -- # uuidgen 00:15:11.338 08:49:28 -- target/ns_masking.sh@15 -- # HOSTID=692d1c14-c634-4dcc-897b-16765f6ed900 00:15:11.338 08:49:28 -- target/ns_masking.sh@44 -- # nvmftestinit 00:15:11.338 08:49:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:11.338 08:49:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.338 08:49:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:11.338 08:49:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:11.338 08:49:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:11.338 08:49:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.338 08:49:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.338 08:49:28 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:15:11.338 08:49:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:11.338 08:49:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:11.338 08:49:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:11.338 08:49:28 -- common/autotest_common.sh@10 -- # set +x 00:15:17.913 08:49:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:17.913 08:49:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:17.913 08:49:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:17.913 08:49:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:17.913 08:49:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:17.913 08:49:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:17.913 08:49:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:17.913 08:49:34 -- nvmf/common.sh@295 -- # net_devs=() 00:15:17.913 08:49:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:17.913 08:49:34 -- nvmf/common.sh@296 -- # e810=() 00:15:17.913 08:49:34 -- nvmf/common.sh@296 -- # local -ga e810 00:15:17.913 08:49:34 -- nvmf/common.sh@297 -- # x722=() 00:15:17.913 08:49:34 -- nvmf/common.sh@297 -- # local -ga x722 00:15:17.913 08:49:34 -- nvmf/common.sh@298 -- # mlx=() 00:15:17.913 08:49:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:17.913 08:49:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.913 08:49:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.913 08:49:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.913 08:49:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.913 08:49:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.913 08:49:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.913 08:49:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.913 08:49:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.913 08:49:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.913 08:49:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.913 08:49:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.913 08:49:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:17.913 08:49:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:17.913 08:49:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:17.913 08:49:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.913 08:49:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:17.913 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:17.913 08:49:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:17.913 08:49:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:17.913 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:17.913 08:49:34 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:17.913 08:49:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.913 08:49:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.913 08:49:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:17.913 08:49:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.913 08:49:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:17.913 Found net devices under 0000:af:00.0: cvl_0_0 00:15:17.913 08:49:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.913 08:49:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:17.913 08:49:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.913 08:49:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:17.913 08:49:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.913 08:49:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:17.913 Found net devices under 0000:af:00.1: cvl_0_1 00:15:17.913 08:49:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.913 08:49:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:17.913 08:49:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:17.913 08:49:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:17.913 08:49:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:17.913 08:49:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.913 08:49:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.913 08:49:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.913 08:49:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:17.913 08:49:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.913 08:49:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.913 08:49:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:17.913 08:49:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.913 08:49:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.913 08:49:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:17.913 08:49:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:17.913 08:49:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.913 08:49:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.914 08:49:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.914 08:49:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.914 08:49:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:17.914 08:49:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:18.173 08:49:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:18.173 08:49:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:18.173 08:49:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:18.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:18.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:15:18.173 00:15:18.173 --- 10.0.0.2 ping statistics --- 00:15:18.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.173 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:15:18.173 08:49:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:18.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:18.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:15:18.173 00:15:18.173 --- 10.0.0.1 ping statistics --- 00:15:18.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.173 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:15:18.173 08:49:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.173 08:49:35 -- nvmf/common.sh@411 -- # return 0 00:15:18.173 08:49:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:18.173 08:49:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.173 08:49:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:18.173 08:49:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:18.173 08:49:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.173 08:49:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:18.173 08:49:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:18.173 08:49:35 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:18.173 08:49:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:18.173 08:49:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:18.173 08:49:35 -- common/autotest_common.sh@10 -- # set +x 00:15:18.173 08:49:35 -- nvmf/common.sh@470 -- # nvmfpid=2013240 00:15:18.173 08:49:35 -- nvmf/common.sh@471 -- # waitforlisten 2013240 00:15:18.173 08:49:35 -- common/autotest_common.sh@817 -- # '[' -z 2013240 ']' 00:15:18.173 08:49:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.173 08:49:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:18.173 08:49:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.173 08:49:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:18.173 08:49:35 -- common/autotest_common.sh@10 -- # set +x 00:15:18.173 08:49:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:18.173 [2024-04-26 08:49:35.310962] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:15:18.173 [2024-04-26 08:49:35.311011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.173 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.173 [2024-04-26 08:49:35.384726] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.433 [2024-04-26 08:49:35.458495] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
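The TCP test-bed setup traced above always follows the same shape: flush the two port addresses, create a namespace to hold the target-side port, address both ends from one /24, open TCP port 4420, and ping in both directions before any NVMe/TCP traffic is attempted. A condensed sketch of that pattern, using a veth pair with placeholder names instead of the physical cvl_0_* ports used here:

    # netns-based loopback rig for NVMe/TCP (sketch; run as root)
    ip netns add nvmf_tgt_ns                        # namespace hosting the target
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns nvmf_tgt_ns          # target side of the pair
    ip addr add 10.0.0.1/24 dev veth_ini            # initiator address
    ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_ini up
    ip netns exec nvmf_tgt_ns ip link set veth_tgt up
    ip netns exec nvmf_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # initiator -> target
    ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1    # target -> initiator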
00:15:18.433 [2024-04-26 08:49:35.458535] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.433 [2024-04-26 08:49:35.458544] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.433 [2024-04-26 08:49:35.458552] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.433 [2024-04-26 08:49:35.458559] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.433 [2024-04-26 08:49:35.458652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.433 [2024-04-26 08:49:35.458768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.433 [2024-04-26 08:49:35.458854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.433 [2024-04-26 08:49:35.458856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.001 08:49:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:19.001 08:49:36 -- common/autotest_common.sh@850 -- # return 0 00:15:19.001 08:49:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:19.001 08:49:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:19.001 08:49:36 -- common/autotest_common.sh@10 -- # set +x 00:15:19.001 08:49:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.001 08:49:36 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:19.261 [2024-04-26 08:49:36.313730] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.261 08:49:36 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:19.261 08:49:36 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:19.261 08:49:36 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:19.261 Malloc1 00:15:19.520 08:49:36 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:19.520 Malloc2 00:15:19.520 08:49:36 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:19.779 08:49:36 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:20.038 08:49:37 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.038 [2024-04-26 08:49:37.208543] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.038 08:49:37 -- target/ns_masking.sh@61 -- # connect 00:15:20.038 08:49:37 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 692d1c14-c634-4dcc-897b-16765f6ed900 -a 10.0.0.2 -s 4420 -i 4 00:15:20.297 08:49:37 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:20.297 08:49:37 -- common/autotest_common.sh@1184 -- # local i=0 00:15:20.297 08:49:37 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:20.297 08:49:37 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
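Everything the ns_masking test provisions on the target goes through rpc.py. The same sequence reduced to its commands (the rpc path is shortened to a placeholder; NQNs, serial, and addresses are the ones from the log):

    rpc=/path/to/spdk/scripts/rpc.py    # placeholder for the full workspace path
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: one connection, 4 I/O queues, fixed host NQN and host ID
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 692d1c14-c634-4dcc-897b-16765f6ed900 -a 10.0.0.2 -s 4420 -i 4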
00:15:20.297 08:49:37 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:22.835 08:49:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:22.835 08:49:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:22.835 08:49:39 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:22.835 08:49:39 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:22.835 08:49:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:22.835 08:49:39 -- common/autotest_common.sh@1194 -- # return 0 00:15:22.835 08:49:39 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:22.835 08:49:39 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:22.835 08:49:39 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:22.835 08:49:39 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:22.835 08:49:39 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:22.835 08:49:39 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:22.835 08:49:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:22.835 [ 0]:0x1 00:15:22.835 08:49:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:22.835 08:49:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:22.835 08:49:39 -- target/ns_masking.sh@40 -- # nguid=e44e991149a0427483579459aad5db46 00:15:22.835 08:49:39 -- target/ns_masking.sh@41 -- # [[ e44e991149a0427483579459aad5db46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:22.835 08:49:39 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:22.835 08:49:39 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:22.835 08:49:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:22.835 08:49:39 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:22.835 [ 0]:0x1 00:15:22.835 08:49:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:22.835 08:49:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:22.835 08:49:39 -- target/ns_masking.sh@40 -- # nguid=e44e991149a0427483579459aad5db46 00:15:22.835 08:49:39 -- target/ns_masking.sh@41 -- # [[ e44e991149a0427483579459aad5db46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:22.835 08:49:39 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:22.835 08:49:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:22.835 08:49:39 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:22.835 [ 1]:0x2 00:15:22.835 08:49:39 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:22.835 08:49:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:22.835 08:49:39 -- target/ns_masking.sh@40 -- # nguid=138bc63f4bc343d3ab786ab2b0f16bfb 00:15:22.835 08:49:39 -- target/ns_masking.sh@41 -- # [[ 138bc63f4bc343d3ab786ab2b0f16bfb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:22.835 08:49:39 -- target/ns_masking.sh@69 -- # disconnect 00:15:22.835 08:49:39 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:22.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.835 08:49:40 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:23.094 08:49:40 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:23.354 08:49:40 -- target/ns_masking.sh@77 -- # connect 1 00:15:23.354 08:49:40 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 692d1c14-c634-4dcc-897b-16765f6ed900 -a 10.0.0.2 -s 4420 -i 4 00:15:23.354 08:49:40 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:23.354 08:49:40 -- common/autotest_common.sh@1184 -- # local i=0 00:15:23.354 08:49:40 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:23.354 08:49:40 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:15:23.354 08:49:40 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:15:23.354 08:49:40 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:25.893 08:49:42 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:25.893 08:49:42 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:25.893 08:49:42 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:25.893 08:49:42 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:25.893 08:49:42 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:25.893 08:49:42 -- common/autotest_common.sh@1194 -- # return 0 00:15:25.893 08:49:42 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:25.893 08:49:42 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:25.893 08:49:42 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:25.893 08:49:42 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:25.893 08:49:42 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:25.893 08:49:42 -- common/autotest_common.sh@638 -- # local es=0 00:15:25.893 08:49:42 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:25.893 08:49:42 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:25.893 08:49:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:25.893 08:49:42 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:25.893 08:49:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:25.893 08:49:42 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:25.893 08:49:42 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:25.893 08:49:42 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:25.893 08:49:42 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:25.893 08:49:42 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:25.893 08:49:42 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:25.893 08:49:42 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.893 08:49:42 -- common/autotest_common.sh@641 -- # es=1 00:15:25.893 08:49:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:25.893 08:49:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:25.893 08:49:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:25.893 08:49:42 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:25.893 08:49:42 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:25.893 08:49:42 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:25.893 [ 0]:0x2 00:15:25.893 08:49:42 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:15:25.893 08:49:42 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:25.893 08:49:42 -- target/ns_masking.sh@40 -- # nguid=138bc63f4bc343d3ab786ab2b0f16bfb 00:15:25.893 08:49:42 -- target/ns_masking.sh@41 -- # [[ 138bc63f4bc343d3ab786ab2b0f16bfb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.893 08:49:42 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:25.893 08:49:43 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:25.893 08:49:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:25.893 08:49:43 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:25.893 [ 0]:0x1 00:15:25.893 08:49:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:25.893 08:49:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:25.893 08:49:43 -- target/ns_masking.sh@40 -- # nguid=e44e991149a0427483579459aad5db46 00:15:25.893 08:49:43 -- target/ns_masking.sh@41 -- # [[ e44e991149a0427483579459aad5db46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.893 08:49:43 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:25.893 08:49:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:25.893 08:49:43 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:25.893 [ 1]:0x2 00:15:25.893 08:49:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:25.893 08:49:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:26.153 08:49:43 -- target/ns_masking.sh@40 -- # nguid=138bc63f4bc343d3ab786ab2b0f16bfb 00:15:26.153 08:49:43 -- target/ns_masking.sh@41 -- # [[ 138bc63f4bc343d3ab786ab2b0f16bfb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.153 08:49:43 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:26.153 08:49:43 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:26.153 08:49:43 -- common/autotest_common.sh@638 -- # local es=0 00:15:26.153 08:49:43 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:26.153 08:49:43 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:26.153 08:49:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:26.153 08:49:43 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:26.153 08:49:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:26.153 08:49:43 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:26.153 08:49:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:26.153 08:49:43 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:26.153 08:49:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:26.153 08:49:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:26.153 08:49:43 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:26.153 08:49:43 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.153 08:49:43 -- common/autotest_common.sh@641 -- # es=1 00:15:26.153 08:49:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:26.153 08:49:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:26.153 08:49:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:26.153 08:49:43 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:26.153 08:49:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:26.153 08:49:43 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:26.153 [ 0]:0x2 00:15:26.153 08:49:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:26.153 08:49:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:26.412 08:49:43 -- target/ns_masking.sh@40 -- # nguid=138bc63f4bc343d3ab786ab2b0f16bfb 00:15:26.412 08:49:43 -- target/ns_masking.sh@41 -- # [[ 138bc63f4bc343d3ab786ab2b0f16bfb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:26.412 08:49:43 -- target/ns_masking.sh@91 -- # disconnect 00:15:26.412 08:49:43 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:26.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.412 08:49:43 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:26.671 08:49:43 -- target/ns_masking.sh@95 -- # connect 2 00:15:26.671 08:49:43 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 692d1c14-c634-4dcc-897b-16765f6ed900 -a 10.0.0.2 -s 4420 -i 4 00:15:26.671 08:49:43 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:26.671 08:49:43 -- common/autotest_common.sh@1184 -- # local i=0 00:15:26.671 08:49:43 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:26.671 08:49:43 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:15:26.671 08:49:43 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:15:26.671 08:49:43 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:28.578 08:49:45 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:28.578 08:49:45 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:28.578 08:49:45 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:28.578 08:49:45 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:15:28.578 08:49:45 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:28.578 08:49:45 -- common/autotest_common.sh@1194 -- # return 0 00:15:28.578 08:49:45 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:28.578 08:49:45 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:28.837 08:49:45 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:28.837 08:49:45 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:28.837 08:49:45 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:28.837 08:49:45 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:28.837 08:49:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:28.837 [ 0]:0x1 00:15:28.837 08:49:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:28.837 08:49:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:28.837 08:49:45 -- target/ns_masking.sh@40 -- # nguid=e44e991149a0427483579459aad5db46 00:15:28.837 08:49:45 -- target/ns_masking.sh@41 -- # [[ e44e991149a0427483579459aad5db46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:28.837 08:49:45 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:28.837 08:49:45 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:28.837 08:49:45 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:28.837 [ 1]:0x2 
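The visibility probe the whole test hinges on is small: a namespace the controller does not expose either disappears from list-ns or reports an all-zero NGUID. The helper as reconstructed from the trace (controller hardcoded to /dev/nvme0 for brevity; error handling omitted):

    ns_is_visible() {    # $1 = NSID as hex, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep -q "$1" || return 1    # not listed => masked
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # a masked namespace identifies with an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }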
00:15:28.837 08:49:45 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:28.837 08:49:45 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:28.837 08:49:45 -- target/ns_masking.sh@40 -- # nguid=138bc63f4bc343d3ab786ab2b0f16bfb 00:15:28.837 08:49:45 -- target/ns_masking.sh@41 -- # [[ 138bc63f4bc343d3ab786ab2b0f16bfb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:28.837 08:49:45 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:29.097 08:49:46 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:29.097 08:49:46 -- common/autotest_common.sh@638 -- # local es=0 00:15:29.097 08:49:46 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:29.097 08:49:46 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:29.097 08:49:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:29.097 08:49:46 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:29.097 08:49:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:29.097 08:49:46 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:29.097 08:49:46 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:29.097 08:49:46 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:29.097 08:49:46 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:29.097 08:49:46 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:29.097 08:49:46 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:29.097 08:49:46 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.097 08:49:46 -- common/autotest_common.sh@641 -- # es=1 00:15:29.097 08:49:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:29.097 08:49:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:29.097 08:49:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:29.097 08:49:46 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:29.097 08:49:46 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:29.097 08:49:46 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:29.097 [ 0]:0x2 00:15:29.097 08:49:46 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:29.097 08:49:46 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:29.097 08:49:46 -- target/ns_masking.sh@40 -- # nguid=138bc63f4bc343d3ab786ab2b0f16bfb 00:15:29.097 08:49:46 -- target/ns_masking.sh@41 -- # [[ 138bc63f4bc343d3ab786ab2b0f16bfb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.097 08:49:46 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:29.097 08:49:46 -- common/autotest_common.sh@638 -- # local es=0 00:15:29.097 08:49:46 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:29.097 08:49:46 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:29.097 08:49:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:29.097 08:49:46 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:29.097 08:49:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:29.097 08:49:46 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:29.097 08:49:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:29.097 08:49:46 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:29.097 08:49:46 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:29.097 08:49:46 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:29.357 [2024-04-26 08:49:46.465837] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:29.357 request: 00:15:29.357 { 00:15:29.357 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.357 "nsid": 2, 00:15:29.357 "host": "nqn.2016-06.io.spdk:host1", 00:15:29.357 "method": "nvmf_ns_remove_host", 00:15:29.357 "req_id": 1 00:15:29.357 } 00:15:29.357 Got JSON-RPC error response 00:15:29.357 response: 00:15:29.357 { 00:15:29.357 "code": -32602, 00:15:29.357 "message": "Invalid parameters" 00:15:29.357 } 00:15:29.357 08:49:46 -- common/autotest_common.sh@641 -- # es=1 00:15:29.357 08:49:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:29.357 08:49:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:29.357 08:49:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:29.357 08:49:46 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:29.357 08:49:46 -- common/autotest_common.sh@638 -- # local es=0 00:15:29.357 08:49:46 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:29.357 08:49:46 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:29.357 08:49:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:29.357 08:49:46 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:29.357 08:49:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:29.357 08:49:46 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:29.357 08:49:46 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:29.357 08:49:46 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:29.357 08:49:46 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:29.357 08:49:46 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:29.357 08:49:46 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:29.357 08:49:46 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.357 08:49:46 -- common/autotest_common.sh@641 -- # es=1 00:15:29.357 08:49:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:29.357 08:49:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:29.357 08:49:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:29.357 08:49:46 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:29.357 08:49:46 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:29.357 08:49:46 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:29.617 [ 0]:0x2 00:15:29.617 08:49:46 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:29.617 08:49:46 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:15:29.617 08:49:46 -- target/ns_masking.sh@40 -- # nguid=138bc63f4bc343d3ab786ab2b0f16bfb 00:15:29.617 08:49:46 -- target/ns_masking.sh@41 -- # [[ 138bc63f4bc343d3ab786ab2b0f16bfb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:29.617 08:49:46 -- target/ns_masking.sh@108 -- # disconnect 00:15:29.617 08:49:46 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:29.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.617 08:49:46 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.877 08:49:46 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:29.877 08:49:46 -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:29.877 08:49:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:29.877 08:49:46 -- nvmf/common.sh@117 -- # sync 00:15:29.877 08:49:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.877 08:49:46 -- nvmf/common.sh@120 -- # set +e 00:15:29.877 08:49:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.877 08:49:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.877 rmmod nvme_tcp 00:15:29.877 rmmod nvme_fabrics 00:15:29.877 rmmod nvme_keyring 00:15:29.877 08:49:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.877 08:49:47 -- nvmf/common.sh@124 -- # set -e 00:15:29.877 08:49:47 -- nvmf/common.sh@125 -- # return 0 00:15:29.877 08:49:47 -- nvmf/common.sh@478 -- # '[' -n 2013240 ']' 00:15:29.877 08:49:47 -- nvmf/common.sh@479 -- # killprocess 2013240 00:15:29.877 08:49:47 -- common/autotest_common.sh@936 -- # '[' -z 2013240 ']' 00:15:29.877 08:49:47 -- common/autotest_common.sh@940 -- # kill -0 2013240 00:15:29.877 08:49:47 -- common/autotest_common.sh@941 -- # uname 00:15:29.877 08:49:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:29.877 08:49:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2013240 00:15:29.877 08:49:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:29.877 08:49:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:29.877 08:49:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2013240' 00:15:29.877 killing process with pid 2013240 00:15:29.877 08:49:47 -- common/autotest_common.sh@955 -- # kill 2013240 00:15:29.877 08:49:47 -- common/autotest_common.sh@960 -- # wait 2013240 00:15:30.137 08:49:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:30.137 08:49:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:30.137 08:49:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:30.137 08:49:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.137 08:49:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:30.137 08:49:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.137 08:49:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.137 08:49:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.675 08:49:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:32.675 00:15:32.675 real 0m21.138s 00:15:32.675 user 0m51.013s 00:15:32.675 sys 0m7.756s 00:15:32.675 08:49:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:32.675 08:49:49 -- common/autotest_common.sh@10 -- # set +x 00:15:32.675 ************************************ 00:15:32.675 END TEST nvmf_ns_masking 00:15:32.675 
************************************ 00:15:32.675 08:49:49 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:32.675 08:49:49 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:32.675 08:49:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:32.675 08:49:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:32.675 08:49:49 -- common/autotest_common.sh@10 -- # set +x 00:15:32.675 ************************************ 00:15:32.675 START TEST nvmf_nvme_cli 00:15:32.675 ************************************ 00:15:32.675 08:49:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:32.675 * Looking for test storage... 00:15:32.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.675 08:49:49 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.675 08:49:49 -- nvmf/common.sh@7 -- # uname -s 00:15:32.675 08:49:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.675 08:49:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.675 08:49:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.675 08:49:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.675 08:49:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.675 08:49:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.675 08:49:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.675 08:49:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.676 08:49:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.676 08:49:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.676 08:49:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:32.676 08:49:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:32.676 08:49:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.676 08:49:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.676 08:49:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.676 08:49:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.676 08:49:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.676 08:49:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.676 08:49:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.676 08:49:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.676 08:49:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.676 08:49:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.676 08:49:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.676 08:49:49 -- paths/export.sh@5 -- # export PATH 00:15:32.676 08:49:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.676 08:49:49 -- nvmf/common.sh@47 -- # : 0 00:15:32.676 08:49:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:32.676 08:49:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:32.676 08:49:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.676 08:49:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.676 08:49:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.676 08:49:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:32.676 08:49:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:32.676 08:49:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:32.676 08:49:49 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:32.676 08:49:49 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:32.676 08:49:49 -- target/nvme_cli.sh@14 -- # devs=() 00:15:32.676 08:49:49 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:32.676 08:49:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:32.676 08:49:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.676 08:49:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:32.676 08:49:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:32.676 08:49:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:32.676 08:49:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.676 08:49:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.676 08:49:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.676 08:49:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:32.676 08:49:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:32.676 08:49:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:32.676 08:49:49 -- common/autotest_common.sh@10 -- # set +x 00:15:39.249 08:49:56 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:39.249 08:49:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:39.249 08:49:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:39.249 08:49:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:39.249 08:49:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:39.249 08:49:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:39.249 08:49:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:39.249 08:49:56 -- nvmf/common.sh@295 -- # net_devs=() 00:15:39.249 08:49:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:39.249 08:49:56 -- nvmf/common.sh@296 -- # e810=() 00:15:39.249 08:49:56 -- nvmf/common.sh@296 -- # local -ga e810 00:15:39.249 08:49:56 -- nvmf/common.sh@297 -- # x722=() 00:15:39.249 08:49:56 -- nvmf/common.sh@297 -- # local -ga x722 00:15:39.249 08:49:56 -- nvmf/common.sh@298 -- # mlx=() 00:15:39.249 08:49:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:39.249 08:49:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:39.249 08:49:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:39.249 08:49:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:39.249 08:49:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:39.249 08:49:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:39.249 08:49:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:39.249 08:49:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:39.249 08:49:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:39.249 08:49:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:39.249 08:49:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:39.249 08:49:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:39.249 08:49:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:39.249 08:49:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:39.249 08:49:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:39.249 08:49:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.249 08:49:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:39.249 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:39.249 08:49:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.249 08:49:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:39.249 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:39.249 08:49:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
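The nvme_cli test repeats the same PCI discovery: supported devices are collected per vendor:device ID, and each matching function is mapped to its bound netdev by globbing sysfs. That lookup in isolation (the BDF is a placeholder):

    pci=0000:af:00.0                                    # placeholder BDF
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # one entry per bound netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"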
00:15:39.249 08:49:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:39.249 08:49:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.249 08:49:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.249 08:49:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:39.249 08:49:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.249 08:49:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:39.249 Found net devices under 0000:af:00.0: cvl_0_0 00:15:39.249 08:49:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.249 08:49:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.249 08:49:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.249 08:49:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:39.249 08:49:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.249 08:49:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:39.249 Found net devices under 0000:af:00.1: cvl_0_1 00:15:39.249 08:49:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.249 08:49:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:39.249 08:49:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:39.249 08:49:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:39.249 08:49:56 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:39.249 08:49:56 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.249 08:49:56 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.249 08:49:56 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:39.249 08:49:56 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:39.249 08:49:56 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:39.249 08:49:56 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:39.249 08:49:56 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:39.249 08:49:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:39.249 08:49:56 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.249 08:49:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:39.249 08:49:56 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:39.249 08:49:56 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:39.249 08:49:56 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:39.249 08:49:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:39.249 08:49:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:39.249 08:49:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:39.249 08:49:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:39.249 08:49:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:39.249 08:49:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:39.509 08:49:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:39.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:39.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:15:39.509 00:15:39.509 --- 10.0.0.2 ping statistics --- 00:15:39.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.509 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:15:39.509 08:49:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:39.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:15:39.509 00:15:39.509 --- 10.0.0.1 ping statistics --- 00:15:39.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.509 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:15:39.509 08:49:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.509 08:49:56 -- nvmf/common.sh@411 -- # return 0 00:15:39.509 08:49:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:39.509 08:49:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.509 08:49:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:39.509 08:49:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:39.509 08:49:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.509 08:49:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:39.509 08:49:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:39.509 08:49:56 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:39.509 08:49:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:39.509 08:49:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:39.509 08:49:56 -- common/autotest_common.sh@10 -- # set +x 00:15:39.509 08:49:56 -- nvmf/common.sh@470 -- # nvmfpid=2019103 00:15:39.509 08:49:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:39.510 08:49:56 -- nvmf/common.sh@471 -- # waitforlisten 2019103 00:15:39.510 08:49:56 -- common/autotest_common.sh@817 -- # '[' -z 2019103 ']' 00:15:39.510 08:49:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.510 08:49:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:39.510 08:49:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.510 08:49:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:39.510 08:49:56 -- common/autotest_common.sh@10 -- # set +x 00:15:39.510 [2024-04-26 08:49:56.610858] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:15:39.510 [2024-04-26 08:49:56.610906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.510 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.510 [2024-04-26 08:49:56.687437] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:39.770 [2024-04-26 08:49:56.759999] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.770 [2024-04-26 08:49:56.760035] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
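Starting the target uses the same start-then-wait idiom both times: launch nvmf_tgt inside the namespace, record the pid, and poll the RPC socket until it answers. A simplified sketch of what waitforlisten does, assuming rpc.py and its default /var/tmp/spdk.sock socket (the binary and script paths are placeholders):

    ip netns exec cvl_0_0_ns_spdk /path/to/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    rpc=/path/to/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" || exit 1                    # target died before listening
        $rpc -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done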
00:15:39.770 [2024-04-26 08:49:56.760044] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.770 [2024-04-26 08:49:56.760053] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.770 [2024-04-26 08:49:56.760061] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.770 [2024-04-26 08:49:56.760101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.770 [2024-04-26 08:49:56.760196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.770 [2024-04-26 08:49:56.760285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:39.770 [2024-04-26 08:49:56.760287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.339 08:49:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:40.339 08:49:57 -- common/autotest_common.sh@850 -- # return 0 00:15:40.339 08:49:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:40.339 08:49:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:40.339 08:49:57 -- common/autotest_common.sh@10 -- # set +x 00:15:40.339 08:49:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.339 08:49:57 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.339 08:49:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.339 08:49:57 -- common/autotest_common.sh@10 -- # set +x 00:15:40.339 [2024-04-26 08:49:57.472311] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.339 08:49:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.339 08:49:57 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:40.339 08:49:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.339 08:49:57 -- common/autotest_common.sh@10 -- # set +x 00:15:40.339 Malloc0 00:15:40.339 08:49:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.339 08:49:57 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:40.339 08:49:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.339 08:49:57 -- common/autotest_common.sh@10 -- # set +x 00:15:40.339 Malloc1 00:15:40.339 08:49:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.339 08:49:57 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:40.339 08:49:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.339 08:49:57 -- common/autotest_common.sh@10 -- # set +x 00:15:40.339 08:49:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.339 08:49:57 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:40.339 08:49:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.339 08:49:57 -- common/autotest_common.sh@10 -- # set +x 00:15:40.339 08:49:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.339 08:49:57 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:40.339 08:49:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.339 08:49:57 -- common/autotest_common.sh@10 -- # set +x 00:15:40.339 08:49:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.339 08:49:57 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:15:40.339 08:49:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.339 08:49:57 -- common/autotest_common.sh@10 -- # set +x 00:15:40.339 [2024-04-26 08:49:57.556240] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.339 08:49:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.339 08:49:57 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:40.339 08:49:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:40.339 08:49:57 -- common/autotest_common.sh@10 -- # set +x 00:15:40.339 08:49:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:40.339 08:49:57 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:15:40.599 00:15:40.599 Discovery Log Number of Records 2, Generation counter 2 00:15:40.599 =====Discovery Log Entry 0====== 00:15:40.599 trtype: tcp 00:15:40.599 adrfam: ipv4 00:15:40.599 subtype: current discovery subsystem 00:15:40.599 treq: not required 00:15:40.599 portid: 0 00:15:40.599 trsvcid: 4420 00:15:40.599 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:40.599 traddr: 10.0.0.2 00:15:40.599 eflags: explicit discovery connections, duplicate discovery information 00:15:40.599 sectype: none 00:15:40.599 =====Discovery Log Entry 1====== 00:15:40.599 trtype: tcp 00:15:40.599 adrfam: ipv4 00:15:40.599 subtype: nvme subsystem 00:15:40.599 treq: not required 00:15:40.599 portid: 0 00:15:40.599 trsvcid: 4420 00:15:40.599 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:40.599 traddr: 10.0.0.2 00:15:40.599 eflags: none 00:15:40.599 sectype: none 00:15:40.599 08:49:57 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:40.599 08:49:57 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:40.599 08:49:57 -- nvmf/common.sh@511 -- # local dev _ 00:15:40.599 08:49:57 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:40.599 08:49:57 -- nvmf/common.sh@510 -- # nvme list 00:15:40.599 08:49:57 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:40.599 08:49:57 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:40.599 08:49:57 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:40.599 08:49:57 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:40.599 08:49:57 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:40.599 08:49:57 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:42.035 08:49:59 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:42.035 08:49:59 -- common/autotest_common.sh@1184 -- # local i=0 00:15:42.035 08:49:59 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:42.035 08:49:59 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:15:42.035 08:49:59 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:15:42.035 08:49:59 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:43.943 08:50:01 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:43.943 08:50:01 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:43.943 08:50:01 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:43.943 08:50:01 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
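The serial wait after connect is just device counting: lsblk lists NAME,SERIAL pairs, grep -c counts rows matching the subsystem serial, and the loop exits once the count reaches the expected number of namespaces. Reconstructed from the trace, slightly simplified:

    waitforserial() {    # $1 = serial, $2 = expected device count (default 1)
        local serial=$1 want=${2:-1} i=0 have=0
        while ((i++ <= 15)); do
            sleep 2
            have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((have == want)) && return 0
        done
        return 1    # devices never appeared
    }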
00:15:43.943 08:50:01 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:43.943 08:50:01 -- common/autotest_common.sh@1194 -- # return 0 00:15:43.943 08:50:01 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:43.943 08:50:01 -- nvmf/common.sh@511 -- # local dev _ 00:15:43.943 08:50:01 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:43.943 08:50:01 -- nvmf/common.sh@510 -- # nvme list 00:15:43.943 08:50:01 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:43.943 08:50:01 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:43.943 08:50:01 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:43.943 08:50:01 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:43.943 08:50:01 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:43.943 08:50:01 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:15:43.943 08:50:01 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:43.943 08:50:01 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:43.943 08:50:01 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:15:43.943 08:50:01 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:43.943 08:50:01 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:43.943 /dev/nvme0n1 ]] 00:15:43.943 08:50:01 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:43.943 08:50:01 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:43.943 08:50:01 -- nvmf/common.sh@511 -- # local dev _ 00:15:43.943 08:50:01 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:43.943 08:50:01 -- nvmf/common.sh@510 -- # nvme list 00:15:43.943 08:50:01 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:15:43.943 08:50:01 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:43.943 08:50:01 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:15:43.943 08:50:01 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:43.943 08:50:01 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:43.943 08:50:01 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:15:43.943 08:50:01 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:43.943 08:50:01 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:43.943 08:50:01 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:15:43.943 08:50:01 -- nvmf/common.sh@513 -- # read -r dev _ 00:15:43.943 08:50:01 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:43.943 08:50:01 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:43.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.943 08:50:01 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:43.943 08:50:01 -- common/autotest_common.sh@1205 -- # local i=0 00:15:43.943 08:50:01 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:43.943 08:50:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.202 08:50:01 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:44.202 08:50:01 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.202 08:50:01 -- common/autotest_common.sh@1217 -- # return 0 00:15:44.202 08:50:01 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:44.202 08:50:01 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:44.202 08:50:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.202 08:50:01 -- common/autotest_common.sh@10 -- # set +x 00:15:44.203 08:50:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.203 08:50:01 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:44.203 08:50:01 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:44.203 08:50:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:44.203 08:50:01 -- nvmf/common.sh@117 -- # sync 00:15:44.203 08:50:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.203 08:50:01 -- nvmf/common.sh@120 -- # set +e 00:15:44.203 08:50:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.203 08:50:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.203 rmmod nvme_tcp 00:15:44.203 rmmod nvme_fabrics 00:15:44.203 rmmod nvme_keyring 00:15:44.203 08:50:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.203 08:50:01 -- nvmf/common.sh@124 -- # set -e 00:15:44.203 08:50:01 -- nvmf/common.sh@125 -- # return 0 00:15:44.203 08:50:01 -- nvmf/common.sh@478 -- # '[' -n 2019103 ']' 00:15:44.203 08:50:01 -- nvmf/common.sh@479 -- # killprocess 2019103 00:15:44.203 08:50:01 -- common/autotest_common.sh@936 -- # '[' -z 2019103 ']' 00:15:44.203 08:50:01 -- common/autotest_common.sh@940 -- # kill -0 2019103 00:15:44.203 08:50:01 -- common/autotest_common.sh@941 -- # uname 00:15:44.203 08:50:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:44.203 08:50:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2019103 00:15:44.203 08:50:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:44.203 08:50:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:44.203 08:50:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2019103' 00:15:44.203 killing process with pid 2019103 00:15:44.203 08:50:01 -- common/autotest_common.sh@955 -- # kill 2019103 00:15:44.203 08:50:01 -- common/autotest_common.sh@960 -- # wait 2019103 00:15:44.462 08:50:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:44.462 08:50:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:44.462 08:50:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:44.462 08:50:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.462 08:50:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.462 08:50:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.462 08:50:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.462 08:50:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.000 08:50:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:47.000 00:15:47.000 real 0m14.047s 00:15:47.000 user 0m20.955s 00:15:47.000 sys 0m5.960s 00:15:47.000 08:50:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:47.000 08:50:03 -- common/autotest_common.sh@10 -- # set +x 00:15:47.000 ************************************ 00:15:47.000 END TEST nvmf_nvme_cli 00:15:47.000 ************************************ 00:15:47.000 08:50:03 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:47.000 08:50:03 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:47.000 08:50:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:47.000 08:50:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:47.000 08:50:03 -- common/autotest_common.sh@10 -- # set +x 00:15:47.000 ************************************ 00:15:47.000 START TEST nvmf_vfio_user 00:15:47.000 ************************************ 00:15:47.000 08:50:03 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:47.000 * Looking for test storage... 00:15:47.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.000 08:50:04 -- nvmf/common.sh@7 -- # uname -s 00:15:47.000 08:50:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.000 08:50:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.000 08:50:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.000 08:50:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.000 08:50:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.000 08:50:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.000 08:50:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.000 08:50:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.000 08:50:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.000 08:50:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.000 08:50:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:47.000 08:50:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:47.000 08:50:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.000 08:50:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.000 08:50:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.000 08:50:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.000 08:50:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.000 08:50:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.000 08:50:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.000 08:50:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.000 08:50:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.000 08:50:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.000 08:50:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.000 08:50:04 -- paths/export.sh@5 -- # export PATH 00:15:47.000 08:50:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.000 08:50:04 -- nvmf/common.sh@47 -- # : 0 00:15:47.000 08:50:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.000 08:50:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.000 08:50:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.000 08:50:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.000 08:50:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.000 08:50:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:47.000 08:50:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.000 08:50:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2020447 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2020447' 00:15:47.000 Process pid: 2020447 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:47.000 08:50:04 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2020447 00:15:47.001 08:50:04 -- common/autotest_common.sh@817 -- # '[' -z 2020447 ']' 00:15:47.001 08:50:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.001 08:50:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:47.001 08:50:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.001 08:50:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:47.001 08:50:04 -- common/autotest_common.sh@10 -- # set +x 00:15:47.001 [2024-04-26 08:50:04.098776] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:15:47.001 [2024-04-26 08:50:04.098829] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.001 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.001 [2024-04-26 08:50:04.174110] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:47.259 [2024-04-26 08:50:04.247429] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.259 [2024-04-26 08:50:04.247474] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.259 [2024-04-26 08:50:04.247485] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.259 [2024-04-26 08:50:04.247494] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.259 [2024-04-26 08:50:04.247501] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.259 [2024-04-26 08:50:04.247547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.259 [2024-04-26 08:50:04.247625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.259 [2024-04-26 08:50:04.247684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:47.259 [2024-04-26 08:50:04.247686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.828 08:50:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:47.828 08:50:04 -- common/autotest_common.sh@850 -- # return 0 00:15:47.828 08:50:04 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:48.765 08:50:05 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:49.025 08:50:06 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:49.025 08:50:06 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:49.025 08:50:06 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:49.025 08:50:06 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:49.025 08:50:06 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:49.285 Malloc1 00:15:49.285 08:50:06 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:49.285 08:50:06 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:49.544 08:50:06 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:49.803 08:50:06 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:49.803 08:50:06 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:49.803 08:50:06 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:49.803 Malloc2 00:15:50.062 08:50:07 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:50.062 08:50:07 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:50.322 08:50:07 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:50.583 08:50:07 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:50.583 08:50:07 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:50.583 08:50:07 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:50.583 08:50:07 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:50.583 08:50:07 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:50.583 08:50:07 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:50.583 [2024-04-26 08:50:07.634325] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:15:50.583 [2024-04-26 08:50:07.634364] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2021176 ] 00:15:50.583 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.583 [2024-04-26 08:50:07.664818] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:50.583 [2024-04-26 08:50:07.672743] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:50.583 [2024-04-26 08:50:07.672763] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f726fccb000 00:15:50.583 [2024-04-26 08:50:07.673746] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:50.583 [2024-04-26 08:50:07.674743] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:50.583 [2024-04-26 08:50:07.675752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:50.583 [2024-04-26 08:50:07.676758] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:50.583 [2024-04-26 08:50:07.677763] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:50.583 [2024-04-26 08:50:07.678771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:50.583 [2024-04-26 08:50:07.679775] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:50.583 [2024-04-26 08:50:07.680780] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:50.583 [2024-04-26 08:50:07.681789] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:50.583 [2024-04-26 08:50:07.681803] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f726fcc0000 00:15:50.583 [2024-04-26 08:50:07.682700] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:50.583 [2024-04-26 08:50:07.691997] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:50.583 [2024-04-26 08:50:07.692024] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:50.583 [2024-04-26 08:50:07.696887] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:50.583 [2024-04-26 08:50:07.696927] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:50.583 [2024-04-26 08:50:07.696998] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:50.583 [2024-04-26 08:50:07.697021] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:50.583 [2024-04-26 08:50:07.697028] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:50.583 [2024-04-26 08:50:07.699458] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:50.584 [2024-04-26 08:50:07.699469] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:50.584 [2024-04-26 08:50:07.699477] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:50.584 [2024-04-26 08:50:07.699896] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:50.584 [2024-04-26 08:50:07.699905] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:50.584 [2024-04-26 08:50:07.699914] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:50.584 [2024-04-26 08:50:07.700903] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:50.584 [2024-04-26 08:50:07.700913] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:50.584 [2024-04-26 08:50:07.701910] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:50.584 [2024-04-26 08:50:07.701919] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:50.584 [2024-04-26 08:50:07.701925] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:50.584 [2024-04-26 08:50:07.701933] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:50.584 [2024-04-26 08:50:07.702040] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:50.584 [2024-04-26 08:50:07.702046] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:50.584 [2024-04-26 08:50:07.702053] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:50.584 [2024-04-26 08:50:07.702917] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:50.584 [2024-04-26 08:50:07.703921] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:50.584 [2024-04-26 08:50:07.704924] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:50.584 [2024-04-26 08:50:07.705920] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:50.584 [2024-04-26 08:50:07.706001] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:50.584 [2024-04-26 08:50:07.706933] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:50.584 [2024-04-26 08:50:07.706942] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:50.584 [2024-04-26 08:50:07.706949] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:50.584 [2024-04-26 08:50:07.706968] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:50.584 [2024-04-26 08:50:07.706982] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:50.584 [2024-04-26 08:50:07.707001] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:50.584 [2024-04-26 08:50:07.707007] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:50.584 [2024-04-26 08:50:07.707023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:50.584 [2024-04-26 
08:50:07.707064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:50.584 [2024-04-26 08:50:07.707075] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:50.584 [2024-04-26 08:50:07.707081] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:50.584 [2024-04-26 08:50:07.707087] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:50.584 [2024-04-26 08:50:07.707093] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:50.584 [2024-04-26 08:50:07.707100] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:50.584 [2024-04-26 08:50:07.707105] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:50.584 [2024-04-26 08:50:07.707111] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:50.584 [2024-04-26 08:50:07.707120] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:50.584 [2024-04-26 08:50:07.707131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:50.584 [2024-04-26 08:50:07.707143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:50.584 [2024-04-26 08:50:07.707157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.584 [2024-04-26 08:50:07.707167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.584 [2024-04-26 08:50:07.707176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.584 [2024-04-26 08:50:07.707185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.584 [2024-04-26 08:50:07.707193] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:50.584 [2024-04-26 08:50:07.707205] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:50.584 [2024-04-26 08:50:07.707215] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:50.584 [2024-04-26 08:50:07.707223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:50.584 [2024-04-26 08:50:07.707230] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:50.584 [2024-04-26 08:50:07.707236] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:50.584 [2024-04-26 08:50:07.707247] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:50.584 [2024-04-26 08:50:07.707255] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:50.584 [2024-04-26 08:50:07.707264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:50.584 [2024-04-26 08:50:07.707279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:50.584 [2024-04-26 08:50:07.707319] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:50.584 [2024-04-26 08:50:07.707328] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:50.584 [2024-04-26 08:50:07.707336] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:50.584 [2024-04-26 08:50:07.707342] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:50.584 [2024-04-26 08:50:07.707349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:50.584 [2024-04-26 08:50:07.707359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:50.584 [2024-04-26 08:50:07.707370] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:50.584 [2024-04-26 08:50:07.707381] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:50.584 [2024-04-26 08:50:07.707391] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:50.584 [2024-04-26 08:50:07.707399] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:50.584 [2024-04-26 08:50:07.707404] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:50.584 [2024-04-26 08:50:07.707411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:50.584 [2024-04-26 08:50:07.707428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:50.584 [2024-04-26 08:50:07.707442] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:50.584 [2024-04-26 08:50:07.707458] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:50.584 [2024-04-26 08:50:07.707468] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:15:50.584 [2024-04-26 08:50:07.707474] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:50.584 [2024-04-26 08:50:07.707481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:50.585 [2024-04-26 08:50:07.707491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:50.585 [2024-04-26 08:50:07.707501] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:50.585 [2024-04-26 08:50:07.707509] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:50.585 [2024-04-26 08:50:07.707518] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:50.585 [2024-04-26 08:50:07.707525] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:50.585 [2024-04-26 08:50:07.707531] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:50.585 [2024-04-26 08:50:07.707538] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:50.585 [2024-04-26 08:50:07.707544] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:50.585 [2024-04-26 08:50:07.707550] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:50.585 [2024-04-26 08:50:07.707568] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:50.585 [2024-04-26 08:50:07.707579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:50.585 [2024-04-26 08:50:07.707592] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:50.585 [2024-04-26 08:50:07.707603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:50.585 [2024-04-26 08:50:07.707615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:50.585 [2024-04-26 08:50:07.707623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:50.585 [2024-04-26 08:50:07.707636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:50.585 [2024-04-26 08:50:07.707649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:50.585 [2024-04-26 08:50:07.707661] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:50.585 [2024-04-26 08:50:07.707667] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:50.585 [2024-04-26 08:50:07.707672] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:50.585 [2024-04-26 08:50:07.707676] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:50.585 [2024-04-26 08:50:07.707683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:50.585 [2024-04-26 08:50:07.707691] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:50.585 [2024-04-26 08:50:07.707698] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:50.585 [2024-04-26 08:50:07.707705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:50.585 [2024-04-26 08:50:07.707713] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:50.585 [2024-04-26 08:50:07.707719] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:50.585 [2024-04-26 08:50:07.707725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:50.585 [2024-04-26 08:50:07.707734] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:50.585 [2024-04-26 08:50:07.707739] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:50.585 [2024-04-26 08:50:07.707746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:50.585 [2024-04-26 08:50:07.707754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:50.585 [2024-04-26 08:50:07.707769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:50.585 [2024-04-26 08:50:07.707780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:50.585 [2024-04-26 08:50:07.707788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:50.585 ===================================================== 00:15:50.585 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:50.585 ===================================================== 00:15:50.585 Controller Capabilities/Features 00:15:50.585 ================================ 00:15:50.585 Vendor ID: 4e58 00:15:50.585 Subsystem Vendor ID: 4e58 00:15:50.585 Serial Number: SPDK1 00:15:50.585 Model Number: SPDK bdev Controller 00:15:50.585 Firmware Version: 24.05 00:15:50.585 Recommended Arb Burst: 6 00:15:50.585 IEEE OUI Identifier: 8d 6b 50 00:15:50.585 Multi-path I/O 00:15:50.585 May have multiple subsystem ports: Yes 00:15:50.585 May have multiple controllers: Yes 00:15:50.585 Associated with SR-IOV VF: No 00:15:50.585 Max Data Transfer Size: 131072 00:15:50.585 Max Number of Namespaces: 32 00:15:50.585 Max Number of I/O Queues: 127 00:15:50.585 NVMe 
Specification Version (VS): 1.3 00:15:50.585 NVMe Specification Version (Identify): 1.3 00:15:50.585 Maximum Queue Entries: 256 00:15:50.585 Contiguous Queues Required: Yes 00:15:50.585 Arbitration Mechanisms Supported 00:15:50.585 Weighted Round Robin: Not Supported 00:15:50.585 Vendor Specific: Not Supported 00:15:50.585 Reset Timeout: 15000 ms 00:15:50.585 Doorbell Stride: 4 bytes 00:15:50.585 NVM Subsystem Reset: Not Supported 00:15:50.585 Command Sets Supported 00:15:50.585 NVM Command Set: Supported 00:15:50.585 Boot Partition: Not Supported 00:15:50.585 Memory Page Size Minimum: 4096 bytes 00:15:50.585 Memory Page Size Maximum: 4096 bytes 00:15:50.585 Persistent Memory Region: Not Supported 00:15:50.585 Optional Asynchronous Events Supported 00:15:50.585 Namespace Attribute Notices: Supported 00:15:50.585 Firmware Activation Notices: Not Supported 00:15:50.585 ANA Change Notices: Not Supported 00:15:50.585 PLE Aggregate Log Change Notices: Not Supported 00:15:50.585 LBA Status Info Alert Notices: Not Supported 00:15:50.585 EGE Aggregate Log Change Notices: Not Supported 00:15:50.585 Normal NVM Subsystem Shutdown event: Not Supported 00:15:50.585 Zone Descriptor Change Notices: Not Supported 00:15:50.585 Discovery Log Change Notices: Not Supported 00:15:50.585 Controller Attributes 00:15:50.585 128-bit Host Identifier: Supported 00:15:50.585 Non-Operational Permissive Mode: Not Supported 00:15:50.585 NVM Sets: Not Supported 00:15:50.585 Read Recovery Levels: Not Supported 00:15:50.585 Endurance Groups: Not Supported 00:15:50.585 Predictable Latency Mode: Not Supported 00:15:50.585 Traffic Based Keep ALive: Not Supported 00:15:50.585 Namespace Granularity: Not Supported 00:15:50.585 SQ Associations: Not Supported 00:15:50.585 UUID List: Not Supported 00:15:50.585 Multi-Domain Subsystem: Not Supported 00:15:50.585 Fixed Capacity Management: Not Supported 00:15:50.585 Variable Capacity Management: Not Supported 00:15:50.585 Delete Endurance Group: Not Supported 00:15:50.585 Delete NVM Set: Not Supported 00:15:50.585 Extended LBA Formats Supported: Not Supported 00:15:50.585 Flexible Data Placement Supported: Not Supported 00:15:50.585 00:15:50.585 Controller Memory Buffer Support 00:15:50.585 ================================ 00:15:50.585 Supported: No 00:15:50.585 00:15:50.585 Persistent Memory Region Support 00:15:50.585 ================================ 00:15:50.585 Supported: No 00:15:50.585 00:15:50.585 Admin Command Set Attributes 00:15:50.585 ============================ 00:15:50.585 Security Send/Receive: Not Supported 00:15:50.585 Format NVM: Not Supported 00:15:50.585 Firmware Activate/Download: Not Supported 00:15:50.585 Namespace Management: Not Supported 00:15:50.585 Device Self-Test: Not Supported 00:15:50.585 Directives: Not Supported 00:15:50.585 NVMe-MI: Not Supported 00:15:50.585 Virtualization Management: Not Supported 00:15:50.585 Doorbell Buffer Config: Not Supported 00:15:50.585 Get LBA Status Capability: Not Supported 00:15:50.585 Command & Feature Lockdown Capability: Not Supported 00:15:50.585 Abort Command Limit: 4 00:15:50.585 Async Event Request Limit: 4 00:15:50.586 Number of Firmware Slots: N/A 00:15:50.586 Firmware Slot 1 Read-Only: N/A 00:15:50.586 Firmware Activation Without Reset: N/A 00:15:50.586 Multiple Update Detection Support: N/A 00:15:50.586 Firmware Update Granularity: No Information Provided 00:15:50.586 Per-Namespace SMART Log: No 00:15:50.586 Asymmetric Namespace Access Log Page: Not Supported 00:15:50.586 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:15:50.586 Command Effects Log Page: Supported 00:15:50.586 Get Log Page Extended Data: Supported 00:15:50.586 Telemetry Log Pages: Not Supported 00:15:50.586 Persistent Event Log Pages: Not Supported 00:15:50.586 Supported Log Pages Log Page: May Support 00:15:50.586 Commands Supported & Effects Log Page: Not Supported 00:15:50.586 Feature Identifiers & Effects Log Page:May Support 00:15:50.586 NVMe-MI Commands & Effects Log Page: May Support 00:15:50.586 Data Area 4 for Telemetry Log: Not Supported 00:15:50.586 Error Log Page Entries Supported: 128 00:15:50.586 Keep Alive: Supported 00:15:50.586 Keep Alive Granularity: 10000 ms 00:15:50.586 00:15:50.586 NVM Command Set Attributes 00:15:50.586 ========================== 00:15:50.586 Submission Queue Entry Size 00:15:50.586 Max: 64 00:15:50.586 Min: 64 00:15:50.586 Completion Queue Entry Size 00:15:50.586 Max: 16 00:15:50.586 Min: 16 00:15:50.586 Number of Namespaces: 32 00:15:50.586 Compare Command: Supported 00:15:50.586 Write Uncorrectable Command: Not Supported 00:15:50.586 Dataset Management Command: Supported 00:15:50.586 Write Zeroes Command: Supported 00:15:50.586 Set Features Save Field: Not Supported 00:15:50.586 Reservations: Not Supported 00:15:50.586 Timestamp: Not Supported 00:15:50.586 Copy: Supported 00:15:50.586 Volatile Write Cache: Present 00:15:50.586 Atomic Write Unit (Normal): 1 00:15:50.586 Atomic Write Unit (PFail): 1 00:15:50.586 Atomic Compare & Write Unit: 1 00:15:50.586 Fused Compare & Write: Supported 00:15:50.586 Scatter-Gather List 00:15:50.586 SGL Command Set: Supported (Dword aligned) 00:15:50.586 SGL Keyed: Not Supported 00:15:50.586 SGL Bit Bucket Descriptor: Not Supported 00:15:50.586 SGL Metadata Pointer: Not Supported 00:15:50.586 Oversized SGL: Not Supported 00:15:50.586 SGL Metadata Address: Not Supported 00:15:50.586 SGL Offset: Not Supported 00:15:50.586 Transport SGL Data Block: Not Supported 00:15:50.586 Replay Protected Memory Block: Not Supported 00:15:50.586 00:15:50.586 Firmware Slot Information 00:15:50.586 ========================= 00:15:50.586 Active slot: 1 00:15:50.586 Slot 1 Firmware Revision: 24.05 00:15:50.586 00:15:50.586 00:15:50.586 Commands Supported and Effects 00:15:50.586 ============================== 00:15:50.586 Admin Commands 00:15:50.586 -------------- 00:15:50.586 Get Log Page (02h): Supported 00:15:50.586 Identify (06h): Supported 00:15:50.586 Abort (08h): Supported 00:15:50.586 Set Features (09h): Supported 00:15:50.586 Get Features (0Ah): Supported 00:15:50.586 Asynchronous Event Request (0Ch): Supported 00:15:50.586 Keep Alive (18h): Supported 00:15:50.586 I/O Commands 00:15:50.586 ------------ 00:15:50.586 Flush (00h): Supported LBA-Change 00:15:50.586 Write (01h): Supported LBA-Change 00:15:50.586 Read (02h): Supported 00:15:50.586 Compare (05h): Supported 00:15:50.586 Write Zeroes (08h): Supported LBA-Change 00:15:50.586 Dataset Management (09h): Supported LBA-Change 00:15:50.586 Copy (19h): Supported LBA-Change 00:15:50.586 Unknown (79h): Supported LBA-Change 00:15:50.586 Unknown (7Ah): Supported 00:15:50.586 00:15:50.586 Error Log 00:15:50.586 ========= 00:15:50.586 00:15:50.586 Arbitration 00:15:50.586 =========== 00:15:50.586 Arbitration Burst: 1 00:15:50.586 00:15:50.586 Power Management 00:15:50.586 ================ 00:15:50.586 Number of Power States: 1 00:15:50.586 Current Power State: Power State #0 00:15:50.586 Power State #0: 00:15:50.586 Max Power: 0.00 W 00:15:50.586 Non-Operational State: Operational 00:15:50.586 Entry 
Latency: Not Reported 00:15:50.586 Exit Latency: Not Reported 00:15:50.586 Relative Read Throughput: 0 00:15:50.586 Relative Read Latency: 0 00:15:50.586 Relative Write Throughput: 0 00:15:50.586 Relative Write Latency: 0 00:15:50.586 Idle Power: Not Reported 00:15:50.586 Active Power: Not Reported 00:15:50.586 Non-Operational Permissive Mode: Not Supported 00:15:50.586 00:15:50.586 Health Information 00:15:50.586 ================== 00:15:50.586 Critical Warnings: 00:15:50.586 Available Spare Space: OK 00:15:50.586 Temperature: OK 00:15:50.586 Device Reliability: OK 00:15:50.586 Read Only: No 00:15:50.586 Volatile Memory Backup: OK 00:15:50.586 Current Temperature: 0 Kelvin (-2[2024-04-26 08:50:07.707883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:50.586 [2024-04-26 08:50:07.707894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:50.586 [2024-04-26 08:50:07.707919] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:50.586 [2024-04-26 08:50:07.707930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.586 [2024-04-26 08:50:07.707938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.586 [2024-04-26 08:50:07.707945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.586 [2024-04-26 08:50:07.707953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.586 [2024-04-26 08:50:07.708942] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:50.586 [2024-04-26 08:50:07.708954] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:50.586 [2024-04-26 08:50:07.709943] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:50.586 [2024-04-26 08:50:07.709990] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:50.586 [2024-04-26 08:50:07.709997] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:50.586 [2024-04-26 08:50:07.710949] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:50.586 [2024-04-26 08:50:07.710961] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:50.586 [2024-04-26 08:50:07.711011] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:50.586 [2024-04-26 08:50:07.714462] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:50.586 73 Celsius) 00:15:50.586 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:50.586 Available Spare: 0% 00:15:50.586 Available Spare Threshold: 0% 00:15:50.586 Life Percentage Used: 0% 
00:15:50.586 Data Units Read: 0 00:15:50.586 Data Units Written: 0 00:15:50.586 Host Read Commands: 0 00:15:50.586 Host Write Commands: 0 00:15:50.586 Controller Busy Time: 0 minutes 00:15:50.586 Power Cycles: 0 00:15:50.586 Power On Hours: 0 hours 00:15:50.586 Unsafe Shutdowns: 0 00:15:50.586 Unrecoverable Media Errors: 0 00:15:50.586 Lifetime Error Log Entries: 0 00:15:50.586 Warning Temperature Time: 0 minutes 00:15:50.586 Critical Temperature Time: 0 minutes 00:15:50.586 00:15:50.586 Number of Queues 00:15:50.586 ================ 00:15:50.586 Number of I/O Submission Queues: 127 00:15:50.586 Number of I/O Completion Queues: 127 00:15:50.586 00:15:50.586 Active Namespaces 00:15:50.586 ================= 00:15:50.586 Namespace ID:1 00:15:50.586 Error Recovery Timeout: Unlimited 00:15:50.586 Command Set Identifier: NVM (00h) 00:15:50.586 Deallocate: Supported 00:15:50.586 Deallocated/Unwritten Error: Not Supported 00:15:50.586 Deallocated Read Value: Unknown 00:15:50.586 Deallocate in Write Zeroes: Not Supported 00:15:50.586 Deallocated Guard Field: 0xFFFF 00:15:50.586 Flush: Supported 00:15:50.586 Reservation: Supported 00:15:50.586 Namespace Sharing Capabilities: Multiple Controllers 00:15:50.586 Size (in LBAs): 131072 (0GiB) 00:15:50.586 Capacity (in LBAs): 131072 (0GiB) 00:15:50.586 Utilization (in LBAs): 131072 (0GiB) 00:15:50.586 NGUID: 76404C9C451647DB9270CBEDFDE15D5A 00:15:50.586 UUID: 76404c9c-4516-47db-9270-cbedfde15d5a 00:15:50.586 Thin Provisioning: Not Supported 00:15:50.586 Per-NS Atomic Units: Yes 00:15:50.586 Atomic Boundary Size (Normal): 0 00:15:50.586 Atomic Boundary Size (PFail): 0 00:15:50.586 Atomic Boundary Offset: 0 00:15:50.586 Maximum Single Source Range Length: 65535 00:15:50.586 Maximum Copy Length: 65535 00:15:50.586 Maximum Source Range Count: 1 00:15:50.586 NGUID/EUI64 Never Reused: No 00:15:50.586 Namespace Write Protected: No 00:15:50.586 Number of LBA Formats: 1 00:15:50.586 Current LBA Format: LBA Format #00 00:15:50.586 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:50.586 00:15:50.587 08:50:07 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:50.587 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.846 [2024-04-26 08:50:07.921866] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:56.125 [2024-04-26 08:50:12.939356] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:56.125 Initializing NVMe Controllers 00:15:56.125 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:56.125 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:56.125 Initialization complete. Launching workers. 
00:15:56.125 ======================================================== 00:15:56.125 Latency(us) 00:15:56.125 Device Information : IOPS MiB/s Average min max 00:15:56.125 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39921.64 155.94 3206.09 925.62 7685.91 00:15:56.125 ======================================================== 00:15:56.125 Total : 39921.64 155.94 3206.09 925.62 7685.91 00:15:56.125 00:15:56.125 08:50:12 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:56.125 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.125 [2024-04-26 08:50:13.153322] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:01.398 [2024-04-26 08:50:18.193724] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:01.398 Initializing NVMe Controllers 00:16:01.398 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:01.398 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:01.398 Initialization complete. Launching workers. 00:16:01.398 ======================================================== 00:16:01.398 Latency(us) 00:16:01.398 Device Information : IOPS MiB/s Average min max 00:16:01.398 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.60 62.71 7978.17 6981.30 8011.45 00:16:01.398 ======================================================== 00:16:01.398 Total : 16054.60 62.71 7978.17 6981.30 8011.45 00:16:01.398 00:16:01.398 08:50:18 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:01.398 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.398 [2024-04-26 08:50:18.419750] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:06.671 [2024-04-26 08:50:23.496774] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:06.671 Initializing NVMe Controllers 00:16:06.671 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:06.671 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:06.671 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:06.671 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:06.671 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:06.671 Initialization complete. Launching workers. 
00:16:06.671 Starting thread on core 2 00:16:06.671 Starting thread on core 3 00:16:06.671 Starting thread on core 1 00:16:06.671 08:50:23 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:06.671 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.671 [2024-04-26 08:50:23.797789] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:09.963 [2024-04-26 08:50:26.856273] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:09.963 Initializing NVMe Controllers 00:16:09.963 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:09.963 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:09.963 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:09.963 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:09.963 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:09.963 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:09.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:09.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:09.963 Initialization complete. Launching workers. 00:16:09.963 Starting thread on core 1 with urgent priority queue 00:16:09.964 Starting thread on core 2 with urgent priority queue 00:16:09.964 Starting thread on core 3 with urgent priority queue 00:16:09.964 Starting thread on core 0 with urgent priority queue 00:16:09.964 SPDK bdev Controller (SPDK1 ) core 0: 7400.33 IO/s 13.51 secs/100000 ios 00:16:09.964 SPDK bdev Controller (SPDK1 ) core 1: 7896.33 IO/s 12.66 secs/100000 ios 00:16:09.964 SPDK bdev Controller (SPDK1 ) core 2: 9753.67 IO/s 10.25 secs/100000 ios 00:16:09.964 SPDK bdev Controller (SPDK1 ) core 3: 8882.00 IO/s 11.26 secs/100000 ios 00:16:09.964 ======================================================== 00:16:09.964 00:16:09.964 08:50:26 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:09.964 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.964 [2024-04-26 08:50:27.146869] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:09.964 [2024-04-26 08:50:27.181197] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:10.223 Initializing NVMe Controllers 00:16:10.223 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:10.223 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:10.223 Namespace ID: 1 size: 0GB 00:16:10.223 Initialization complete. 00:16:10.223 INFO: using host memory buffer for IO 00:16:10.223 Hello world! 
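The @84-@88 steps above all drive the same vfio-user endpoint with SPDK's stock example binaries, so they can be replayed by hand against a live target. A minimal sketch, assuming an SPDK target is already serving /var/run/vfio-user/domain/vfio-user1/1 as in this job, and using this workspace's paths (adjust SPDK for your own checkout):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  # step @84: 4096-byte reads, queue depth 128, 5 seconds, core mask 0x2 (core 1)
  "$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  # step @88: one-shot write/read-back sanity check against the same controller
  "$SPDK/build/examples/hello_world" -d 256 -g -r "$TRID"

The -r transport ID string is what selects the vfio-user transport; every flag is copied verbatim from the steps above rather than invented here.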
00:16:10.223 08:50:27 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:10.223 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.223 [2024-04-26 08:50:27.453805] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:11.601 Initializing NVMe Controllers 00:16:11.601 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:11.601 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:11.601 Initialization complete. Launching workers. 00:16:11.601 submit (in ns) avg, min, max = 6247.3, 3025.6, 4000171.2 00:16:11.601 complete (in ns) avg, min, max = 18640.0, 1650.4, 6990180.8 00:16:11.601 00:16:11.601 Submit histogram 00:16:11.601 ================ 00:16:11.601 Range in us Cumulative Count 00:16:11.601 3.021 - 3.034: 0.0119% ( 2) 00:16:11.601 3.034 - 3.046: 0.0237% ( 2) 00:16:11.601 3.046 - 3.059: 0.0415% ( 3) 00:16:11.601 3.059 - 3.072: 0.1067% ( 11) 00:16:11.601 3.072 - 3.085: 0.3556% ( 42) 00:16:11.601 3.085 - 3.098: 0.8297% ( 80) 00:16:11.601 3.098 - 3.110: 1.5231% ( 117) 00:16:11.601 3.110 - 3.123: 2.4181% ( 151) 00:16:11.601 3.123 - 3.136: 3.9945% ( 266) 00:16:11.601 3.136 - 3.149: 6.4956% ( 422) 00:16:11.601 3.149 - 3.162: 9.9982% ( 591) 00:16:11.601 3.162 - 3.174: 14.3306% ( 731) 00:16:11.601 3.174 - 3.187: 18.9415% ( 778) 00:16:11.601 3.187 - 3.200: 25.0282% ( 1027) 00:16:11.601 3.200 - 3.213: 30.9311% ( 996) 00:16:11.601 3.213 - 3.226: 36.4784% ( 936) 00:16:11.601 3.226 - 3.238: 41.9546% ( 924) 00:16:11.601 3.238 - 3.251: 47.3123% ( 904) 00:16:11.601 3.251 - 3.264: 53.1026% ( 977) 00:16:11.601 3.264 - 3.277: 56.9312% ( 646) 00:16:11.601 3.277 - 3.302: 62.3363% ( 912) 00:16:11.601 3.302 - 3.328: 68.6481% ( 1065) 00:16:11.601 3.328 - 3.354: 74.4385% ( 977) 00:16:11.601 3.354 - 3.379: 81.0052% ( 1108) 00:16:11.601 3.379 - 3.405: 86.3688% ( 905) 00:16:11.601 3.405 - 3.430: 88.2831% ( 323) 00:16:11.601 3.430 - 3.456: 89.2432% ( 162) 00:16:11.601 3.456 - 3.482: 90.1618% ( 155) 00:16:11.601 3.482 - 3.507: 91.3234% ( 196) 00:16:11.601 3.507 - 3.533: 92.7517% ( 241) 00:16:11.601 3.533 - 3.558: 94.2097% ( 246) 00:16:11.601 3.558 - 3.584: 95.3298% ( 189) 00:16:11.601 3.584 - 3.610: 96.4796% ( 194) 00:16:11.601 3.610 - 3.635: 97.6234% ( 193) 00:16:11.601 3.635 - 3.661: 98.4769% ( 144) 00:16:11.601 3.661 - 3.686: 98.9273% ( 76) 00:16:11.601 3.686 - 3.712: 99.3125% ( 65) 00:16:11.601 3.712 - 3.738: 99.5377% ( 38) 00:16:11.601 3.738 - 3.763: 99.6266% ( 15) 00:16:11.601 3.763 - 3.789: 99.6622% ( 6) 00:16:11.601 3.789 - 3.814: 99.6740% ( 2) 00:16:11.601 4.301 - 4.326: 99.6800% ( 1) 00:16:11.601 5.709 - 5.734: 99.6859% ( 1) 00:16:11.601 6.477 - 6.502: 99.6918% ( 1) 00:16:11.601 6.605 - 6.656: 99.7096% ( 3) 00:16:11.601 6.707 - 6.758: 99.7214% ( 2) 00:16:11.601 6.758 - 6.810: 99.7274% ( 1) 00:16:11.601 6.861 - 6.912: 99.7392% ( 2) 00:16:11.601 6.963 - 7.014: 99.7452% ( 1) 00:16:11.601 7.066 - 7.117: 99.7570% ( 2) 00:16:11.601 7.117 - 7.168: 99.7866% ( 5) 00:16:11.601 7.219 - 7.270: 99.7926% ( 1) 00:16:11.601 7.270 - 7.322: 99.7985% ( 1) 00:16:11.601 7.424 - 7.475: 99.8044% ( 1) 00:16:11.601 7.475 - 7.526: 99.8281% ( 4) 00:16:11.601 7.578 - 7.629: 99.8459% ( 3) 00:16:11.601 7.629 - 7.680: 99.8518% ( 1) 00:16:11.601 7.731 - 7.782: 99.8578% ( 1) 00:16:11.601 7.834 - 7.885: 99.8637% ( 1) 00:16:11.601 8.090 - 8.141: 99.8696% ( 1) 00:16:11.601 8.141 - 
8.192: 99.8755% ( 1)
00:16:11.601 8.243 - 8.294: 99.8815% ( 1)
00:16:11.601 8.550 - 8.602: 99.8933% ( 2)
00:16:11.601 8.653 - 8.704: 99.8992% ( 1)
00:16:11.601 11.162 - 11.213: 99.9052% ( 1)
00:16:11.601 12.032 - 12.083: 99.9111% ( 1)
00:16:11.601 13.414 - 13.517: 99.9170% ( 1)
00:16:11.601 17.101 - 17.203: 99.9230% ( 1)
00:16:11.601 1769.472 - 1782.579: 99.9289% ( 1)
00:16:11.601 3984.589 - 4010.803: 100.0000% ( 12)
00:16:11.601
00:16:11.601 Complete histogram
00:16:11.601 ==================
00:16:11.601 Range in us Cumulative Count
00:16:11.601 1.638 - 1.651: 0.0059% ( 1)
00:16:11.601 1.651 - 1.664: 0.2134% ( 35)
00:16:11.601 1.664 - 1.677: 0.4801% ( 45)
00:16:11.601 1.677 - 1.690: 0.5275% ( 8)
00:16:11.601 1.690 - 1.702: 2.9574% ( 410)
00:16:11.601 1.702 - 1.715: 24.7615% ( 3679)
00:16:11.601 1.715 - 1.728: 36.6147% ( 2000)
[2024-04-26 08:50:28.471816] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:16:11.601 1.728 - 1.741: 39.6432% ( 511)
00:16:11.601 1.741 - 1.754: 40.9708% ( 224)
00:16:11.601 1.754 - 1.766: 56.0126% ( 2538)
00:16:11.601 1.766 - 1.779: 87.1511% ( 5254)
00:16:11.601 1.779 - 1.792: 94.4705% ( 1235)
00:16:11.601 1.792 - 1.805: 96.6218% ( 363)
00:16:11.601 1.805 - 1.818: 97.3627% ( 125)
00:16:11.601 1.818 - 1.830: 97.7005% ( 57)
00:16:11.601 1.830 - 1.843: 98.4057% ( 119)
00:16:11.601 1.843 - 1.856: 99.0399% ( 107)
00:16:11.601 1.856 - 1.869: 99.3184% ( 47)
00:16:11.601 1.869 - 1.882: 99.3599% ( 7)
00:16:11.601 1.882 - 1.894: 99.3659% ( 1)
00:16:11.601 1.894 - 1.907: 99.3955% ( 5)
00:16:11.601 1.907 - 1.920: 99.4073% ( 2)
00:16:11.602 1.984 - 1.997: 99.4133% ( 1)
00:16:11.602 2.022 - 2.035: 99.4192% ( 1)
00:16:11.602 2.035 - 2.048: 99.4251% ( 1)
00:16:11.602 2.099 - 2.112: 99.4310% ( 1)
00:16:11.602 2.150 - 2.163: 99.4370% ( 1)
00:16:11.602 2.240 - 2.253: 99.4429% ( 1)
00:16:11.602 4.557 - 4.582: 99.4488% ( 1)
00:16:11.602 4.582 - 4.608: 99.4548% ( 1)
00:16:11.602 5.094 - 5.120: 99.4607% ( 1)
00:16:11.602 5.376 - 5.402: 99.4725% ( 2)
00:16:11.602 5.734 - 5.760: 99.4785% ( 1)
00:16:11.602 5.837 - 5.862: 99.4844% ( 1)
00:16:11.602 5.914 - 5.939: 99.4903% ( 1)
00:16:11.602 5.990 - 6.016: 99.4962% ( 1)
00:16:11.602 6.195 - 6.221: 99.5022% ( 1)
00:16:11.602 6.246 - 6.272: 99.5081% ( 1)
00:16:11.602 6.374 - 6.400: 99.5140% ( 1)
00:16:11.602 6.426 - 6.451: 99.5199% ( 1)
00:16:11.602 6.502 - 6.528: 99.5259% ( 1)
00:16:11.602 6.528 - 6.554: 99.5318% ( 1)
00:16:11.602 6.605 - 6.656: 99.5377% ( 1)
00:16:11.602 6.758 - 6.810: 99.5436% ( 1)
00:16:11.602 6.963 - 7.014: 99.5496% ( 1)
00:16:11.602 7.014 - 7.066: 99.5555% ( 1)
00:16:11.602 7.168 - 7.219: 99.5614% ( 1)
00:16:11.602 7.373 - 7.424: 99.5674% ( 1)
00:16:11.602 7.731 - 7.782: 99.5733% ( 1)
00:16:11.602 15.565 - 15.667: 99.5792% ( 1)
00:16:11.602 16.384 - 16.486: 99.5851% ( 1)
00:16:11.602 3984.589 - 4010.803: 99.9822% ( 67)
00:16:11.602 4168.090 - 4194.304: 99.9881% ( 1)
00:16:11.602 5976.883 - 6003.098: 99.9941% ( 1)
00:16:11.602 6973.030 - 7025.459: 100.0000% ( 1)
00:16:11.602
00:16:11.602 08:50:28 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:16:11.602 08:50:28 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:16:11.602 08:50:28 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:16:11.602 08:50:28 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:16:11.602 08:50:28 -- target/nvmf_vfio_user.sh@25 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:11.602 [2024-04-26 08:50:28.661989] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:11.602 [ 00:16:11.602 { 00:16:11.602 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:11.602 "subtype": "Discovery", 00:16:11.602 "listen_addresses": [], 00:16:11.602 "allow_any_host": true, 00:16:11.602 "hosts": [] 00:16:11.602 }, 00:16:11.602 { 00:16:11.602 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:11.602 "subtype": "NVMe", 00:16:11.602 "listen_addresses": [ 00:16:11.602 { 00:16:11.602 "transport": "VFIOUSER", 00:16:11.602 "trtype": "VFIOUSER", 00:16:11.602 "adrfam": "IPv4", 00:16:11.602 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:11.602 "trsvcid": "0" 00:16:11.602 } 00:16:11.602 ], 00:16:11.602 "allow_any_host": true, 00:16:11.602 "hosts": [], 00:16:11.602 "serial_number": "SPDK1", 00:16:11.602 "model_number": "SPDK bdev Controller", 00:16:11.602 "max_namespaces": 32, 00:16:11.602 "min_cntlid": 1, 00:16:11.602 "max_cntlid": 65519, 00:16:11.602 "namespaces": [ 00:16:11.602 { 00:16:11.602 "nsid": 1, 00:16:11.602 "bdev_name": "Malloc1", 00:16:11.602 "name": "Malloc1", 00:16:11.602 "nguid": "76404C9C451647DB9270CBEDFDE15D5A", 00:16:11.602 "uuid": "76404c9c-4516-47db-9270-cbedfde15d5a" 00:16:11.602 } 00:16:11.602 ] 00:16:11.602 }, 00:16:11.602 { 00:16:11.602 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:11.602 "subtype": "NVMe", 00:16:11.602 "listen_addresses": [ 00:16:11.602 { 00:16:11.602 "transport": "VFIOUSER", 00:16:11.602 "trtype": "VFIOUSER", 00:16:11.602 "adrfam": "IPv4", 00:16:11.602 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:11.602 "trsvcid": "0" 00:16:11.602 } 00:16:11.602 ], 00:16:11.602 "allow_any_host": true, 00:16:11.602 "hosts": [], 00:16:11.602 "serial_number": "SPDK2", 00:16:11.602 "model_number": "SPDK bdev Controller", 00:16:11.602 "max_namespaces": 32, 00:16:11.602 "min_cntlid": 1, 00:16:11.602 "max_cntlid": 65519, 00:16:11.602 "namespaces": [ 00:16:11.602 { 00:16:11.602 "nsid": 1, 00:16:11.602 "bdev_name": "Malloc2", 00:16:11.602 "name": "Malloc2", 00:16:11.602 "nguid": "5C77D7D52A6443C99C282CDA3C8F744F", 00:16:11.602 "uuid": "5c77d7d5-2a64-43c9-9c28-2cda3c8f744f" 00:16:11.602 } 00:16:11.602 ] 00:16:11.602 } 00:16:11.602 ] 00:16:11.602 08:50:28 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:11.602 08:50:28 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2024730 00:16:11.602 08:50:28 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:11.602 08:50:28 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:11.602 08:50:28 -- common/autotest_common.sh@1251 -- # local i=0 00:16:11.602 08:50:28 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:11.602 08:50:28 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:11.602 08:50:28 -- common/autotest_common.sh@1262 -- # return 0 00:16:11.602 08:50:28 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:11.602 08:50:28 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:11.602 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.862 Malloc3 00:16:11.862 [2024-04-26 08:50:28.865846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:11.862 08:50:28 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:11.862 [2024-04-26 08:50:29.046128] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:11.862 08:50:29 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:11.862 Asynchronous Event Request test 00:16:11.862 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:11.862 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:11.862 Registering asynchronous event callbacks... 00:16:11.862 Starting namespace attribute notice tests for all controllers... 00:16:11.862 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:11.862 aer_cb - Changed Namespace 00:16:11.862 Cleaning up... 00:16:12.122 [ 00:16:12.122 { 00:16:12.122 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:12.122 "subtype": "Discovery", 00:16:12.122 "listen_addresses": [], 00:16:12.122 "allow_any_host": true, 00:16:12.123 "hosts": [] 00:16:12.123 }, 00:16:12.123 { 00:16:12.123 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:12.123 "subtype": "NVMe", 00:16:12.123 "listen_addresses": [ 00:16:12.123 { 00:16:12.123 "transport": "VFIOUSER", 00:16:12.123 "trtype": "VFIOUSER", 00:16:12.123 "adrfam": "IPv4", 00:16:12.123 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:12.123 "trsvcid": "0" 00:16:12.123 } 00:16:12.123 ], 00:16:12.123 "allow_any_host": true, 00:16:12.123 "hosts": [], 00:16:12.123 "serial_number": "SPDK1", 00:16:12.123 "model_number": "SPDK bdev Controller", 00:16:12.123 "max_namespaces": 32, 00:16:12.123 "min_cntlid": 1, 00:16:12.123 "max_cntlid": 65519, 00:16:12.123 "namespaces": [ 00:16:12.123 { 00:16:12.123 "nsid": 1, 00:16:12.123 "bdev_name": "Malloc1", 00:16:12.123 "name": "Malloc1", 00:16:12.123 "nguid": "76404C9C451647DB9270CBEDFDE15D5A", 00:16:12.123 "uuid": "76404c9c-4516-47db-9270-cbedfde15d5a" 00:16:12.123 }, 00:16:12.123 { 00:16:12.123 "nsid": 2, 00:16:12.123 "bdev_name": "Malloc3", 00:16:12.123 "name": "Malloc3", 00:16:12.123 "nguid": "7DE9F1FF71A749D1AD2C3F15D02A56EF", 00:16:12.123 "uuid": "7de9f1ff-71a7-49d1-ad2c-3f15d02a56ef" 00:16:12.123 } 00:16:12.123 ] 00:16:12.123 }, 00:16:12.123 { 00:16:12.123 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:12.123 "subtype": "NVMe", 00:16:12.123 "listen_addresses": [ 00:16:12.123 { 00:16:12.123 "transport": "VFIOUSER", 00:16:12.123 "trtype": "VFIOUSER", 00:16:12.123 "adrfam": "IPv4", 00:16:12.123 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:12.123 "trsvcid": "0" 00:16:12.123 } 00:16:12.123 ], 00:16:12.123 "allow_any_host": true, 00:16:12.123 "hosts": [], 00:16:12.123 "serial_number": "SPDK2", 00:16:12.123 "model_number": "SPDK bdev Controller", 00:16:12.123 "max_namespaces": 32, 00:16:12.123 "min_cntlid": 1, 
00:16:12.123 "max_cntlid": 65519, 00:16:12.123 "namespaces": [ 00:16:12.123 { 00:16:12.123 "nsid": 1, 00:16:12.123 "bdev_name": "Malloc2", 00:16:12.123 "name": "Malloc2", 00:16:12.123 "nguid": "5C77D7D52A6443C99C282CDA3C8F744F", 00:16:12.123 "uuid": "5c77d7d5-2a64-43c9-9c28-2cda3c8f744f" 00:16:12.123 } 00:16:12.123 ] 00:16:12.123 } 00:16:12.123 ] 00:16:12.123 08:50:29 -- target/nvmf_vfio_user.sh@44 -- # wait 2024730 00:16:12.123 08:50:29 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:12.123 08:50:29 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:12.123 08:50:29 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:12.123 08:50:29 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:12.123 [2024-04-26 08:50:29.271634] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:16:12.123 [2024-04-26 08:50:29.271680] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2024809 ] 00:16:12.123 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.123 [2024-04-26 08:50:29.302186] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:12.123 [2024-04-26 08:50:29.311697] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:12.123 [2024-04-26 08:50:29.311718] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f93ee7ab000 00:16:12.123 [2024-04-26 08:50:29.312703] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.123 [2024-04-26 08:50:29.313708] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.123 [2024-04-26 08:50:29.314713] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.123 [2024-04-26 08:50:29.315723] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:12.123 [2024-04-26 08:50:29.316726] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:12.123 [2024-04-26 08:50:29.317732] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.123 [2024-04-26 08:50:29.318743] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:12.123 [2024-04-26 08:50:29.319749] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:12.123 [2024-04-26 08:50:29.320757] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:12.123 [2024-04-26 08:50:29.320772] vfio_user_pci.c: 
233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f93ee7a0000 00:16:12.123 [2024-04-26 08:50:29.321664] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:12.123 [2024-04-26 08:50:29.334547] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:12.123 [2024-04-26 08:50:29.334570] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:12.123 [2024-04-26 08:50:29.339658] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:12.123 [2024-04-26 08:50:29.339695] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:12.123 [2024-04-26 08:50:29.339760] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:12.123 [2024-04-26 08:50:29.339780] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:12.123 [2024-04-26 08:50:29.339787] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:12.123 [2024-04-26 08:50:29.340664] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:12.123 [2024-04-26 08:50:29.340676] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:12.123 [2024-04-26 08:50:29.340685] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:12.123 [2024-04-26 08:50:29.341665] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:12.123 [2024-04-26 08:50:29.341676] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:12.123 [2024-04-26 08:50:29.341685] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:12.123 [2024-04-26 08:50:29.342672] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:12.123 [2024-04-26 08:50:29.342683] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:12.123 [2024-04-26 08:50:29.343680] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:12.123 [2024-04-26 08:50:29.343690] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:12.123 [2024-04-26 08:50:29.343697] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:12.123 [2024-04-26 08:50:29.343705] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:12.123 [2024-04-26 08:50:29.343812] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:12.123 [2024-04-26 08:50:29.343818] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:12.123 [2024-04-26 08:50:29.343827] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:12.123 [2024-04-26 08:50:29.344687] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:12.123 [2024-04-26 08:50:29.345691] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:12.123 [2024-04-26 08:50:29.346706] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:12.123 [2024-04-26 08:50:29.347706] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:12.123 [2024-04-26 08:50:29.347746] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:12.123 [2024-04-26 08:50:29.348712] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:12.123 [2024-04-26 08:50:29.348722] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:12.123 [2024-04-26 08:50:29.348729] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:12.123 [2024-04-26 08:50:29.348748] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:12.123 [2024-04-26 08:50:29.348761] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:12.123 [2024-04-26 08:50:29.348776] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:12.123 [2024-04-26 08:50:29.348783] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:12.123 [2024-04-26 08:50:29.348796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:12.123 [2024-04-26 08:50:29.353462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:12.123 [2024-04-26 08:50:29.353475] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:12.124 [2024-04-26 08:50:29.353481] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:12.124 [2024-04-26 08:50:29.353487] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:12.124 [2024-04-26 08:50:29.353493] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:12.124 [2024-04-26 08:50:29.353500] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:12.124 [2024-04-26 08:50:29.353506] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:12.124 [2024-04-26 08:50:29.353512] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:12.124 [2024-04-26 08:50:29.353521] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:12.124 [2024-04-26 08:50:29.353531] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:12.124 [2024-04-26 08:50:29.361456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:12.124 [2024-04-26 08:50:29.361473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.124 [2024-04-26 08:50:29.361484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.124 [2024-04-26 08:50:29.361493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.124 [2024-04-26 08:50:29.361502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:12.124 [2024-04-26 08:50:29.361508] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:12.124 [2024-04-26 08:50:29.361518] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:12.124 [2024-04-26 08:50:29.361528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:12.384 [2024-04-26 08:50:29.369457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:12.384 [2024-04-26 08:50:29.369467] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:12.384 [2024-04-26 08:50:29.369474] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:12.384 [2024-04-26 08:50:29.369485] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:12.384 [2024-04-26 08:50:29.369492] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:12.384 [2024-04-26 08:50:29.369501] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:12.384 [2024-04-26 08:50:29.377458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:12.384 [2024-04-26 08:50:29.377504] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:12.384 [2024-04-26 08:50:29.377514] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:12.384 [2024-04-26 08:50:29.377523] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:12.384 [2024-04-26 08:50:29.377529] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:12.384 [2024-04-26 08:50:29.377536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:12.384 [2024-04-26 08:50:29.385457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:12.384 [2024-04-26 08:50:29.385470] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:12.384 [2024-04-26 08:50:29.385481] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:12.384 [2024-04-26 08:50:29.385490] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:12.384 [2024-04-26 08:50:29.385499] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:12.384 [2024-04-26 08:50:29.385504] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:12.384 [2024-04-26 08:50:29.385512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:12.384 [2024-04-26 08:50:29.393458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:12.384 [2024-04-26 08:50:29.393474] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:12.384 [2024-04-26 08:50:29.393484] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:12.384 [2024-04-26 08:50:29.393492] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:12.384 [2024-04-26 08:50:29.393498] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:12.384 [2024-04-26 08:50:29.393505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:12.384 [2024-04-26 08:50:29.401458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:12.384 [2024-04-26 08:50:29.401470] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:12.384 [2024-04-26 08:50:29.401479] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:12.384 [2024-04-26 08:50:29.401489] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:12.384 [2024-04-26 08:50:29.401496] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:12.384 [2024-04-26 08:50:29.401503] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:12.384 [2024-04-26 08:50:29.401510] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:12.384 [2024-04-26 08:50:29.401516] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:12.384 [2024-04-26 08:50:29.401522] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:12.384 [2024-04-26 08:50:29.401540] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:12.384 [2024-04-26 08:50:29.409459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:12.384 [2024-04-26 08:50:29.409475] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:12.384 [2024-04-26 08:50:29.417458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:12.384 [2024-04-26 08:50:29.417472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:12.384 [2024-04-26 08:50:29.425459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:12.384 [2024-04-26 08:50:29.425474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:12.384 [2024-04-26 08:50:29.433457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:12.384 [2024-04-26 08:50:29.433472] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:12.384 [2024-04-26 08:50:29.433478] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:12.384 [2024-04-26 08:50:29.433483] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:12.384 [2024-04-26 08:50:29.433490] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:12.384 [2024-04-26 08:50:29.433497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:12.385 
[2024-04-26 08:50:29.433505] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:12.385 [2024-04-26 08:50:29.433511] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:12.385 [2024-04-26 08:50:29.433518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:12.385 [2024-04-26 08:50:29.433526] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:12.385 [2024-04-26 08:50:29.433532] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:12.385 [2024-04-26 08:50:29.433538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:12.385 [2024-04-26 08:50:29.433547] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:12.385 [2024-04-26 08:50:29.433552] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:12.385 [2024-04-26 08:50:29.433559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:12.385 [2024-04-26 08:50:29.441458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:12.385 [2024-04-26 08:50:29.441475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:12.385 [2024-04-26 08:50:29.441486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:12.385 [2024-04-26 08:50:29.441494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:12.385 ===================================================== 00:16:12.385 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:12.385 ===================================================== 00:16:12.385 Controller Capabilities/Features 00:16:12.385 ================================ 00:16:12.385 Vendor ID: 4e58 00:16:12.385 Subsystem Vendor ID: 4e58 00:16:12.385 Serial Number: SPDK2 00:16:12.385 Model Number: SPDK bdev Controller 00:16:12.385 Firmware Version: 24.05 00:16:12.385 Recommended Arb Burst: 6 00:16:12.385 IEEE OUI Identifier: 8d 6b 50 00:16:12.385 Multi-path I/O 00:16:12.385 May have multiple subsystem ports: Yes 00:16:12.385 May have multiple controllers: Yes 00:16:12.385 Associated with SR-IOV VF: No 00:16:12.385 Max Data Transfer Size: 131072 00:16:12.385 Max Number of Namespaces: 32 00:16:12.385 Max Number of I/O Queues: 127 00:16:12.385 NVMe Specification Version (VS): 1.3 00:16:12.385 NVMe Specification Version (Identify): 1.3 00:16:12.385 Maximum Queue Entries: 256 00:16:12.385 Contiguous Queues Required: Yes 00:16:12.385 Arbitration Mechanisms Supported 00:16:12.385 Weighted Round Robin: Not Supported 00:16:12.385 Vendor Specific: Not Supported 00:16:12.385 Reset Timeout: 15000 ms 00:16:12.385 Doorbell Stride: 4 bytes 00:16:12.385 NVM Subsystem Reset: Not Supported 00:16:12.385 Command Sets Supported 00:16:12.385 NVM Command Set: Supported 00:16:12.385 Boot Partition: Not Supported 00:16:12.385 
Memory Page Size Minimum: 4096 bytes 00:16:12.385 Memory Page Size Maximum: 4096 bytes 00:16:12.385 Persistent Memory Region: Not Supported 00:16:12.385 Optional Asynchronous Events Supported 00:16:12.385 Namespace Attribute Notices: Supported 00:16:12.385 Firmware Activation Notices: Not Supported 00:16:12.385 ANA Change Notices: Not Supported 00:16:12.385 PLE Aggregate Log Change Notices: Not Supported 00:16:12.385 LBA Status Info Alert Notices: Not Supported 00:16:12.385 EGE Aggregate Log Change Notices: Not Supported 00:16:12.385 Normal NVM Subsystem Shutdown event: Not Supported 00:16:12.385 Zone Descriptor Change Notices: Not Supported 00:16:12.385 Discovery Log Change Notices: Not Supported 00:16:12.385 Controller Attributes 00:16:12.385 128-bit Host Identifier: Supported 00:16:12.385 Non-Operational Permissive Mode: Not Supported 00:16:12.385 NVM Sets: Not Supported 00:16:12.385 Read Recovery Levels: Not Supported 00:16:12.385 Endurance Groups: Not Supported 00:16:12.385 Predictable Latency Mode: Not Supported 00:16:12.385 Traffic Based Keep ALive: Not Supported 00:16:12.385 Namespace Granularity: Not Supported 00:16:12.385 SQ Associations: Not Supported 00:16:12.385 UUID List: Not Supported 00:16:12.385 Multi-Domain Subsystem: Not Supported 00:16:12.385 Fixed Capacity Management: Not Supported 00:16:12.385 Variable Capacity Management: Not Supported 00:16:12.385 Delete Endurance Group: Not Supported 00:16:12.385 Delete NVM Set: Not Supported 00:16:12.385 Extended LBA Formats Supported: Not Supported 00:16:12.385 Flexible Data Placement Supported: Not Supported 00:16:12.385 00:16:12.385 Controller Memory Buffer Support 00:16:12.385 ================================ 00:16:12.385 Supported: No 00:16:12.385 00:16:12.385 Persistent Memory Region Support 00:16:12.385 ================================ 00:16:12.385 Supported: No 00:16:12.385 00:16:12.385 Admin Command Set Attributes 00:16:12.385 ============================ 00:16:12.385 Security Send/Receive: Not Supported 00:16:12.385 Format NVM: Not Supported 00:16:12.385 Firmware Activate/Download: Not Supported 00:16:12.385 Namespace Management: Not Supported 00:16:12.385 Device Self-Test: Not Supported 00:16:12.385 Directives: Not Supported 00:16:12.385 NVMe-MI: Not Supported 00:16:12.385 Virtualization Management: Not Supported 00:16:12.385 Doorbell Buffer Config: Not Supported 00:16:12.385 Get LBA Status Capability: Not Supported 00:16:12.385 Command & Feature Lockdown Capability: Not Supported 00:16:12.385 Abort Command Limit: 4 00:16:12.385 Async Event Request Limit: 4 00:16:12.385 Number of Firmware Slots: N/A 00:16:12.385 Firmware Slot 1 Read-Only: N/A 00:16:12.385 Firmware Activation Without Reset: N/A 00:16:12.385 Multiple Update Detection Support: N/A 00:16:12.385 Firmware Update Granularity: No Information Provided 00:16:12.385 Per-Namespace SMART Log: No 00:16:12.385 Asymmetric Namespace Access Log Page: Not Supported 00:16:12.385 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:12.385 Command Effects Log Page: Supported 00:16:12.385 Get Log Page Extended Data: Supported 00:16:12.385 Telemetry Log Pages: Not Supported 00:16:12.385 Persistent Event Log Pages: Not Supported 00:16:12.385 Supported Log Pages Log Page: May Support 00:16:12.385 Commands Supported & Effects Log Page: Not Supported 00:16:12.385 Feature Identifiers & Effects Log Page:May Support 00:16:12.385 NVMe-MI Commands & Effects Log Page: May Support 00:16:12.385 Data Area 4 for Telemetry Log: Not Supported 00:16:12.385 Error Log Page Entries Supported: 128 
00:16:12.385 Keep Alive: Supported 00:16:12.385 Keep Alive Granularity: 10000 ms 00:16:12.385 00:16:12.385 NVM Command Set Attributes 00:16:12.385 ========================== 00:16:12.385 Submission Queue Entry Size 00:16:12.385 Max: 64 00:16:12.385 Min: 64 00:16:12.385 Completion Queue Entry Size 00:16:12.385 Max: 16 00:16:12.385 Min: 16 00:16:12.385 Number of Namespaces: 32 00:16:12.385 Compare Command: Supported 00:16:12.385 Write Uncorrectable Command: Not Supported 00:16:12.385 Dataset Management Command: Supported 00:16:12.385 Write Zeroes Command: Supported 00:16:12.385 Set Features Save Field: Not Supported 00:16:12.385 Reservations: Not Supported 00:16:12.385 Timestamp: Not Supported 00:16:12.385 Copy: Supported 00:16:12.385 Volatile Write Cache: Present 00:16:12.385 Atomic Write Unit (Normal): 1 00:16:12.385 Atomic Write Unit (PFail): 1 00:16:12.385 Atomic Compare & Write Unit: 1 00:16:12.385 Fused Compare & Write: Supported 00:16:12.385 Scatter-Gather List 00:16:12.385 SGL Command Set: Supported (Dword aligned) 00:16:12.385 SGL Keyed: Not Supported 00:16:12.385 SGL Bit Bucket Descriptor: Not Supported 00:16:12.385 SGL Metadata Pointer: Not Supported 00:16:12.385 Oversized SGL: Not Supported 00:16:12.385 SGL Metadata Address: Not Supported 00:16:12.385 SGL Offset: Not Supported 00:16:12.385 Transport SGL Data Block: Not Supported 00:16:12.385 Replay Protected Memory Block: Not Supported 00:16:12.385 00:16:12.385 Firmware Slot Information 00:16:12.385 ========================= 00:16:12.385 Active slot: 1 00:16:12.385 Slot 1 Firmware Revision: 24.05 00:16:12.385 00:16:12.385 00:16:12.385 Commands Supported and Effects 00:16:12.385 ============================== 00:16:12.385 Admin Commands 00:16:12.385 -------------- 00:16:12.385 Get Log Page (02h): Supported 00:16:12.385 Identify (06h): Supported 00:16:12.385 Abort (08h): Supported 00:16:12.385 Set Features (09h): Supported 00:16:12.385 Get Features (0Ah): Supported 00:16:12.385 Asynchronous Event Request (0Ch): Supported 00:16:12.385 Keep Alive (18h): Supported 00:16:12.385 I/O Commands 00:16:12.385 ------------ 00:16:12.385 Flush (00h): Supported LBA-Change 00:16:12.385 Write (01h): Supported LBA-Change 00:16:12.385 Read (02h): Supported 00:16:12.385 Compare (05h): Supported 00:16:12.385 Write Zeroes (08h): Supported LBA-Change 00:16:12.385 Dataset Management (09h): Supported LBA-Change 00:16:12.385 Copy (19h): Supported LBA-Change 00:16:12.385 Unknown (79h): Supported LBA-Change 00:16:12.386 Unknown (7Ah): Supported 00:16:12.386 00:16:12.386 Error Log 00:16:12.386 ========= 00:16:12.386 00:16:12.386 Arbitration 00:16:12.386 =========== 00:16:12.386 Arbitration Burst: 1 00:16:12.386 00:16:12.386 Power Management 00:16:12.386 ================ 00:16:12.386 Number of Power States: 1 00:16:12.386 Current Power State: Power State #0 00:16:12.386 Power State #0: 00:16:12.386 Max Power: 0.00 W 00:16:12.386 Non-Operational State: Operational 00:16:12.386 Entry Latency: Not Reported 00:16:12.386 Exit Latency: Not Reported 00:16:12.386 Relative Read Throughput: 0 00:16:12.386 Relative Read Latency: 0 00:16:12.386 Relative Write Throughput: 0 00:16:12.386 Relative Write Latency: 0 00:16:12.386 Idle Power: Not Reported 00:16:12.386 Active Power: Not Reported 00:16:12.386 Non-Operational Permissive Mode: Not Supported 00:16:12.386 00:16:12.386 Health Information 00:16:12.386 ================== 00:16:12.386 Critical Warnings: 00:16:12.386 Available Spare Space: OK 00:16:12.386 Temperature: OK 00:16:12.386 Device Reliability: OK 00:16:12.386 
Read Only: No
00:16:12.386 Volatile Memory Backup: OK
00:16:12.386 Current Temperature: 0 Kelvin (-273 Celsius)
[2024-04-26 08:50:29.441587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:16:12.386 [2024-04-26 08:50:29.449458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:16:12.386 [2024-04-26 08:50:29.449487] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD
00:16:12.386 [2024-04-26 08:50:29.449498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:12.386 [2024-04-26 08:50:29.449506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:12.386 [2024-04-26 08:50:29.449514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:12.386 [2024-04-26 08:50:29.449522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:12.386 [2024-04-26 08:50:29.449574] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:16:12.386 [2024-04-26 08:50:29.449586] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:16:12.386 [2024-04-26 08:50:29.450583] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:16:12.386 [2024-04-26 08:50:29.450629] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us
00:16:12.386 [2024-04-26 08:50:29.450637] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms
00:16:12.386 [2024-04-26 08:50:29.451587] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:16:12.386 [2024-04-26 08:50:29.451600] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds
00:16:12.386 [2024-04-26 08:50:29.451648] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:16:12.386 [2024-04-26 08:50:29.454637] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:16:12.386 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:16:12.386 Available Spare: 0%
00:16:12.386 Available Spare Threshold: 0%
00:16:12.386 Life Percentage Used: 0%
00:16:12.386 Data Units Read: 0
00:16:12.386 Data Units Written: 0
00:16:12.386 Host Read Commands: 0
00:16:12.386 Host Write Commands: 0
00:16:12.386 Controller Busy Time: 0 minutes
00:16:12.386 Power Cycles: 0
00:16:12.386 Power On Hours: 0 hours
00:16:12.386 Unsafe Shutdowns: 0
00:16:12.386 Unrecoverable Media Errors: 0
00:16:12.386 Lifetime Error Log Entries: 0
00:16:12.386 Warning Temperature Time: 0 minutes
00:16:12.386 Critical Temperature Time: 0 minutes
00:16:12.386
00:16:12.386 Number of Queues
00:16:12.386 ================
00:16:12.386 Number of I/O Submission Queues: 127
00:16:12.386 Number of I/O Completion Queues: 127 00:16:12.386 00:16:12.386 Active Namespaces 00:16:12.386 ================= 00:16:12.386 Namespace ID:1 00:16:12.386 Error Recovery Timeout: Unlimited 00:16:12.386 Command Set Identifier: NVM (00h) 00:16:12.386 Deallocate: Supported 00:16:12.386 Deallocated/Unwritten Error: Not Supported 00:16:12.386 Deallocated Read Value: Unknown 00:16:12.386 Deallocate in Write Zeroes: Not Supported 00:16:12.386 Deallocated Guard Field: 0xFFFF 00:16:12.386 Flush: Supported 00:16:12.386 Reservation: Supported 00:16:12.386 Namespace Sharing Capabilities: Multiple Controllers 00:16:12.386 Size (in LBAs): 131072 (0GiB) 00:16:12.386 Capacity (in LBAs): 131072 (0GiB) 00:16:12.386 Utilization (in LBAs): 131072 (0GiB) 00:16:12.386 NGUID: 5C77D7D52A6443C99C282CDA3C8F744F 00:16:12.386 UUID: 5c77d7d5-2a64-43c9-9c28-2cda3c8f744f 00:16:12.386 Thin Provisioning: Not Supported 00:16:12.386 Per-NS Atomic Units: Yes 00:16:12.386 Atomic Boundary Size (Normal): 0 00:16:12.386 Atomic Boundary Size (PFail): 0 00:16:12.386 Atomic Boundary Offset: 0 00:16:12.386 Maximum Single Source Range Length: 65535 00:16:12.386 Maximum Copy Length: 65535 00:16:12.386 Maximum Source Range Count: 1 00:16:12.386 NGUID/EUI64 Never Reused: No 00:16:12.386 Namespace Write Protected: No 00:16:12.386 Number of LBA Formats: 1 00:16:12.386 Current LBA Format: LBA Format #00 00:16:12.386 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:12.386 00:16:12.386 08:50:29 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:12.386 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.645 [2024-04-26 08:50:29.664430] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:17.955 [2024-04-26 08:50:34.771709] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:17.955 Initializing NVMe Controllers 00:16:17.955 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:17.955 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:17.955 Initialization complete. Launching workers. 
00:16:17.955 ========================================================
00:16:17.955 Latency(us)
00:16:17.955 Device Information : IOPS MiB/s Average min max
00:16:17.955 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39920.93 155.94 3206.16 913.30 8683.11
00:16:17.955 ========================================================
00:16:17.955 Total : 39920.93 155.94 3206.16 913.30 8683.11
00:16:17.955
00:16:17.955 08:50:34 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:16:17.955 EAL: No free 2048 kB hugepages reported on node 1
00:16:17.955 [2024-04-26 08:50:34.990453] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:16:23.226 [2024-04-26 08:50:40.010986] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:16:23.226 Initializing NVMe Controllers
00:16:23.226 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:16:23.226 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:16:23.226 Initialization complete. Launching workers.
00:16:23.226 ========================================================
00:16:23.226 Latency(us)
00:16:23.226 Device Information : IOPS MiB/s Average min max
00:16:23.226 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39940.76 156.02 3204.58 929.20 7668.39
00:16:23.226 ========================================================
00:16:23.226 Total : 39940.76 156.02 3204.58 929.20 7668.39
00:16:23.226
00:16:23.226 08:50:40 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:16:23.226 EAL: No free 2048 kB hugepages reported on node 1
00:16:23.226 [2024-04-26 08:50:40.232101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:16:28.495 [2024-04-26 08:50:45.369551] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:16:28.495 Initializing NVMe Controllers
00:16:28.495 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:16:28.495 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:16:28.495 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1
00:16:28.495 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2
00:16:28.495 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3
00:16:28.495 Initialization complete. Launching workers.
00:16:28.495 Starting thread on core 2 00:16:28.495 Starting thread on core 3 00:16:28.495 Starting thread on core 1 00:16:28.495 08:50:45 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:28.495 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.495 [2024-04-26 08:50:45.668914] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:31.787 [2024-04-26 08:50:48.754478] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:31.787 Initializing NVMe Controllers 00:16:31.787 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.787 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:31.787 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:31.787 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:31.787 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:31.787 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:31.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:31.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:31.788 Initialization complete. Launching workers. 00:16:31.788 Starting thread on core 1 with urgent priority queue 00:16:31.788 Starting thread on core 2 with urgent priority queue 00:16:31.788 Starting thread on core 3 with urgent priority queue 00:16:31.788 Starting thread on core 0 with urgent priority queue 00:16:31.788 SPDK bdev Controller (SPDK2 ) core 0: 8026.33 IO/s 12.46 secs/100000 ios 00:16:31.788 SPDK bdev Controller (SPDK2 ) core 1: 8042.67 IO/s 12.43 secs/100000 ios 00:16:31.788 SPDK bdev Controller (SPDK2 ) core 2: 7561.33 IO/s 13.23 secs/100000 ios 00:16:31.788 SPDK bdev Controller (SPDK2 ) core 3: 9848.33 IO/s 10.15 secs/100000 ios 00:16:31.788 ======================================================== 00:16:31.788 00:16:31.788 08:50:48 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:31.788 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.047 [2024-04-26 08:50:49.051914] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:32.047 [2024-04-26 08:50:49.061984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:32.047 Initializing NVMe Controllers 00:16:32.047 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:32.047 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:32.047 Namespace ID: 1 size: 0GB 00:16:32.047 Initialization complete. 00:16:32.047 INFO: using host memory buffer for IO 00:16:32.047 Hello world! 
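All five runs above (perf read, perf write, reconnect, arbitration, hello_world) reach the same vfio-user controller through the -r transport-ID string rather than a PCI address or a TCP endpoint. A minimal sketch of that invocation pattern, with flag values copied from the log; SPDK_DIR and TRID are illustrative shell variables, not names used by the test script itself:

#!/usr/bin/env bash
# Point an SPDK example binary at a vfio-user controller: the transport ID
# names the socket directory (traddr) and the subsystem NQN (subnqn).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

# 4 KiB reads for 5 seconds (-o/-t) at queue depth 128 (-q) on core mask 0x2
# (-c), matching the spdk_nvme_perf run shown earlier in this log:
"$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

# The example apps accept the same -r string:
"$SPDK_DIR/build/examples/hello_world" -d 256 -g -r "$TRID"

Only traddr and subnqn change per controller; the trtype:VFIOUSER prefix is what selects the vfio-user transport instead of PCIe or TCP.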
00:16:32.047 08:50:49 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:32.047 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.317 [2024-04-26 08:50:49.339670] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:33.286 Initializing NVMe Controllers 00:16:33.286 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:33.286 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:33.286 Initialization complete. Launching workers. 00:16:33.286 submit (in ns) avg, min, max = 7278.8, 3048.8, 4000466.4 00:16:33.286 complete (in ns) avg, min, max = 18721.6, 1684.0, 7050092.0 00:16:33.286 00:16:33.286 Submit histogram 00:16:33.286 ================ 00:16:33.286 Range in us Cumulative Count 00:16:33.286 3.046 - 3.059: 0.0058% ( 1) 00:16:33.286 3.059 - 3.072: 0.0526% ( 8) 00:16:33.286 3.072 - 3.085: 0.1636% ( 19) 00:16:33.287 3.085 - 3.098: 0.5549% ( 67) 00:16:33.287 3.098 - 3.110: 1.2617% ( 121) 00:16:33.287 3.110 - 3.123: 2.2839% ( 175) 00:16:33.287 3.123 - 3.136: 4.3750% ( 358) 00:16:33.287 3.136 - 3.149: 7.8271% ( 591) 00:16:33.287 3.149 - 3.162: 11.4953% ( 628) 00:16:33.287 3.162 - 3.174: 16.1449% ( 796) 00:16:33.287 3.174 - 3.187: 21.6939% ( 950) 00:16:33.287 3.187 - 3.200: 26.8224% ( 878) 00:16:33.287 3.200 - 3.213: 32.5759% ( 985) 00:16:33.287 3.213 - 3.226: 38.6098% ( 1033) 00:16:33.287 3.226 - 3.238: 45.0643% ( 1105) 00:16:33.287 3.238 - 3.251: 50.4965% ( 930) 00:16:33.287 3.251 - 3.264: 54.1881% ( 632) 00:16:33.287 3.264 - 3.277: 56.7932% ( 446) 00:16:33.287 3.277 - 3.302: 63.0315% ( 1068) 00:16:33.287 3.302 - 3.328: 68.5456% ( 944) 00:16:33.287 3.328 - 3.354: 73.9136% ( 919) 00:16:33.287 3.354 - 3.379: 81.2383% ( 1254) 00:16:33.287 3.379 - 3.405: 84.8715% ( 622) 00:16:33.287 3.405 - 3.430: 86.2734% ( 240) 00:16:33.287 3.430 - 3.456: 87.3423% ( 183) 00:16:33.287 3.456 - 3.482: 88.6215% ( 219) 00:16:33.287 3.482 - 3.507: 90.2395% ( 277) 00:16:33.287 3.507 - 3.533: 92.0853% ( 316) 00:16:33.287 3.533 - 3.558: 93.6916% ( 275) 00:16:33.287 3.558 - 3.584: 94.9241% ( 211) 00:16:33.287 3.584 - 3.610: 96.0222% ( 188) 00:16:33.287 3.610 - 3.635: 97.2313% ( 207) 00:16:33.287 3.635 - 3.661: 97.9030% ( 115) 00:16:33.287 3.661 - 3.686: 98.3178% ( 71) 00:16:33.287 3.686 - 3.712: 98.6507% ( 57) 00:16:33.287 3.712 - 3.738: 98.9077% ( 44) 00:16:33.287 3.738 - 3.763: 98.9661% ( 10) 00:16:33.287 3.763 - 3.789: 99.0304% ( 11) 00:16:33.287 3.789 - 3.814: 99.0771% ( 8) 00:16:33.287 3.814 - 3.840: 99.1063% ( 5) 00:16:33.287 3.840 - 3.866: 99.1180% ( 2) 00:16:33.287 3.866 - 3.891: 99.1414% ( 4) 00:16:33.287 3.891 - 3.917: 99.1530% ( 2) 00:16:33.287 3.917 - 3.942: 99.1764% ( 4) 00:16:33.287 3.942 - 3.968: 99.1881% ( 2) 00:16:33.287 3.968 - 3.994: 99.1939% ( 1) 00:16:33.287 4.019 - 4.045: 99.2056% ( 2) 00:16:33.287 4.045 - 4.070: 99.2173% ( 2) 00:16:33.287 4.096 - 4.122: 99.2231% ( 1) 00:16:33.287 4.122 - 4.147: 99.2290% ( 1) 00:16:33.287 4.147 - 4.173: 99.2348% ( 1) 00:16:33.287 4.173 - 4.198: 99.2465% ( 2) 00:16:33.287 4.198 - 4.224: 99.2582% ( 2) 00:16:33.287 4.250 - 4.275: 99.2640% ( 1) 00:16:33.287 4.275 - 4.301: 99.2699% ( 1) 00:16:33.287 4.301 - 4.326: 99.2757% ( 1) 00:16:33.287 4.326 - 4.352: 99.2815% ( 1) 00:16:33.287 4.352 - 4.378: 99.2874% ( 1) 00:16:33.287 4.403 - 4.429: 99.2991% ( 2) 00:16:33.287 4.429 - 4.454: 99.3049% ( 1) 00:16:33.287 
4.454 - 4.480: 99.3107% ( 1) 00:16:33.287 4.557 - 4.582: 99.3166% ( 1) 00:16:33.287 4.582 - 4.608: 99.3224% ( 1) 00:16:33.287 4.608 - 4.634: 99.3341% ( 2) 00:16:33.287 4.634 - 4.659: 99.3400% ( 1) 00:16:33.287 4.710 - 4.736: 99.3516% ( 2) 00:16:33.287 4.736 - 4.762: 99.3633% ( 2) 00:16:33.287 4.787 - 4.813: 99.3808% ( 3) 00:16:33.287 4.838 - 4.864: 99.3925% ( 2) 00:16:33.287 4.864 - 4.890: 99.4042% ( 2) 00:16:33.287 4.941 - 4.966: 99.4100% ( 1) 00:16:33.287 4.966 - 4.992: 99.4217% ( 2) 00:16:33.287 4.992 - 5.018: 99.4334% ( 2) 00:16:33.287 5.069 - 5.094: 99.4393% ( 1) 00:16:33.287 5.120 - 5.146: 99.4451% ( 1) 00:16:33.287 5.146 - 5.171: 99.4568% ( 2) 00:16:33.287 5.248 - 5.274: 99.4626% ( 1) 00:16:33.287 5.325 - 5.350: 99.4685% ( 1) 00:16:33.287 5.402 - 5.427: 99.4743% ( 1) 00:16:33.287 5.453 - 5.478: 99.4918% ( 3) 00:16:33.287 5.530 - 5.555: 99.5035% ( 2) 00:16:33.287 5.555 - 5.581: 99.5093% ( 1) 00:16:33.287 5.606 - 5.632: 99.5152% ( 1) 00:16:33.287 5.632 - 5.658: 99.5269% ( 2) 00:16:33.287 5.734 - 5.760: 99.5327% ( 1) 00:16:33.287 5.760 - 5.786: 99.5386% ( 1) 00:16:33.287 5.811 - 5.837: 99.5444% ( 1) 00:16:33.287 5.914 - 5.939: 99.5502% ( 1) 00:16:33.287 5.965 - 5.990: 99.5561% ( 1) 00:16:33.287 6.016 - 6.042: 99.5619% ( 1) 00:16:33.287 6.093 - 6.118: 99.5678% ( 1) 00:16:33.287 6.118 - 6.144: 99.5794% ( 2) 00:16:33.287 6.246 - 6.272: 99.5853% ( 1) 00:16:33.287 6.426 - 6.451: 99.5970% ( 2) 00:16:33.287 6.451 - 6.477: 99.6028% ( 1) 00:16:33.287 6.477 - 6.502: 99.6086% ( 1) 00:16:33.287 6.502 - 6.528: 99.6145% ( 1) 00:16:33.287 6.656 - 6.707: 99.6203% ( 1) 00:16:33.287 6.707 - 6.758: 99.6262% ( 1) 00:16:33.287 6.861 - 6.912: 99.6320% ( 1) 00:16:33.287 6.912 - 6.963: 99.6379% ( 1) 00:16:33.287 6.963 - 7.014: 99.6612% ( 4) 00:16:33.287 7.014 - 7.066: 99.6729% ( 2) 00:16:33.287 7.117 - 7.168: 99.6787% ( 1) 00:16:33.287 7.168 - 7.219: 99.6846% ( 1) 00:16:33.287 7.270 - 7.322: 99.7021% ( 3) 00:16:33.287 7.322 - 7.373: 99.7079% ( 1) 00:16:33.287 7.373 - 7.424: 99.7138% ( 1) 00:16:33.287 7.424 - 7.475: 99.7371% ( 4) 00:16:33.287 7.782 - 7.834: 99.7430% ( 1) 00:16:33.287 7.987 - 8.038: 99.7547% ( 2) 00:16:33.287 8.038 - 8.090: 99.7605% ( 1) 00:16:33.287 8.090 - 8.141: 99.7722% ( 2) 00:16:33.287 8.192 - 8.243: 99.7780% ( 1) 00:16:33.287 8.243 - 8.294: 99.7839% ( 1) 00:16:33.287 8.294 - 8.346: 99.7897% ( 1) 00:16:33.287 8.397 - 8.448: 99.7956% ( 1) 00:16:33.287 8.448 - 8.499: 99.8072% ( 2) 00:16:33.287 8.602 - 8.653: 99.8189% ( 2) 00:16:33.287 8.755 - 8.806: 99.8248% ( 1) 00:16:33.287 8.806 - 8.858: 99.8364% ( 2) 00:16:33.287 9.062 - 9.114: 99.8423% ( 1) 00:16:33.287 9.421 - 9.472: 99.8481% ( 1) 00:16:33.287 9.779 - 9.830: 99.8540% ( 1) 00:16:33.287 10.035 - 10.086: 99.8598% ( 1) 00:16:33.287 10.240 - 10.291: 99.8657% ( 1) 00:16:33.287 12.390 - 12.442: 99.8715% ( 1) 00:16:33.287 12.698 - 12.749: 99.8773% ( 1) 00:16:33.287 13.517 - 13.619: 99.8832% ( 1) 00:16:33.287 15.155 - 15.258: 99.8890% ( 1) 00:16:33.287 16.896 - 16.998: 99.8949% ( 1) 00:16:33.287 19.354 - 19.456: 99.9007% ( 1) 00:16:33.287 3984.589 - 4010.803: 100.0000% ( 17) 00:16:33.287 00:16:33.287 Complete histogram 00:16:33.287 ================== 00:16:33.287 Range in us Cumulative Count 00:16:33.287 1.677 - 1.690: 0.0584% ( 10) 00:16:33.287 1.690 - 1.702: 11.5362% ( 1965) 00:16:33.287 1.702 - 1.715: 67.3598% ( 9557) 00:16:33.287 1.715 - 1.728: 81.5771% ( 2434) 00:16:33.287 1.728 - 1.741: 84.9357% ( 575) 00:16:33.287 1.741 - 1.754: 89.7079% ( 817) 00:16:33.287 1.754 - 1.766: 93.1367% ( 587) 00:16:33.287 1.766 - 1.779: 94.4568% ( 226) 
00:16:33.287 1.779 - 1.792: 95.6717% ( 208) 00:16:33.287 1.792 - 1.805: 96.1799% ( 87) 00:16:33.287 1.805 - 1.818: 97.1320% ( 163) 00:16:33.287 1.818 - 1.830: 97.6168% ( 83) 00:16:33.287 1.830 - 1.843: 97.7336% ( 20) 00:16:33.287 1.843 - 1.856: 97.8271% ( 16) 00:16:33.287 1.856 - 1.869: 98.0199% ( 33) 00:16:33.287 1.869 - 1.882: 98.2301% ( 36) 00:16:33.287 1.882 - 1.894: 98.3002% ( 12) 00:16:33.287 1.894 - 1.907: 98.3762% ( 13) 00:16:33.287 1.907 - 1.920: 98.4463% ( 12) 00:16:33.287 1.920 - 1.933: 98.4930% ( 8) 00:16:33.287 1.933 - 1.946: 98.5397% ( 8) 00:16:33.287 1.946 - 1.958: 98.6040% ( 11) 00:16:33.287 1.958 - 1.971: 98.6215% ( 3) 00:16:33.287 1.971 - 1.984: 98.6682% ( 8) 00:16:33.287 1.984 - 1.997: 98.6974% ( 5) 00:16:33.287 1.997 - 2.010: 98.7266% ( 5) 00:16:33.287 2.010 - 2.022: 98.7442% ( 3) 00:16:33.288 2.022 - 2.035: 98.7617% ( 3) 00:16:33.288 2.035 - 2.048: 98.7909% ( 5) 00:16:33.288 2.048 - 2.061: 98.7967% ( 1) 00:16:33.288 2.061 - 2.074: 98.8318% ( 6) 00:16:33.288 2.074 - 2.086: 98.8610% ( 5) 00:16:33.288 2.086 - 2.099: 98.8727% ( 2) 00:16:33.288 2.099 - 2.112: 98.8785% ( 1) 00:16:33.288 2.112 - 2.125: 98.9077% ( 5) 00:16:33.288 2.138 - 2.150: 98.9136% ( 1) 00:16:33.288 2.150 - 2.163: 98.9194% ( 1) 00:16:33.288 2.163 - 2.176: 98.9252% ( 1) 00:16:33.288 2.176 - 2.189: 98.9369% ( 2) 00:16:33.288 2.189 - 2.202: 98.9428% ( 1) 00:16:33.288 2.202 - 2.214: 98.9544% ( 2) 00:16:33.288 2.214 - 2.227: 98.9603% ( 1) 00:16:33.288 2.227 - 2.240: 98.9661% ( 1) 00:16:33.288 2.253 - 2.266: 98.9720% ( 1) 00:16:33.288 2.266 - 2.278: 98.9778% ( 1) 00:16:33.288 2.278 - 2.291: 98.9895% ( 2) 00:16:33.288 2.291 - 2.304: 98.9953% ( 1) 00:16:33.288 2.317 - 2.330: 99.0070% ( 2) 00:16:33.288 2.419 - 2.432: 99.0129% ( 1) 00:16:33.288 2.445 - 2.458: 99.0245% ( 2) 00:16:33.288 2.483 - 2.496: 99.0304% ( 1) 00:16:33.288 2.496 - 2.509: 99.0421% ( 2) 00:16:33.288 2.509 - 2.522: 99.0537% ( 2) 00:16:33.288 2.522 - 2.534: 99.0654% ( 2) 00:16:33.288 2.573 - 2.586: 99.0771% ( 2) 00:16:33.288 2.586 - 2.598: 99.0829% ( 1) 00:16:33.288 2.598 - 2.611: 99.0888% ( 1) 00:16:33.288 2.611 - 2.624: 99.1005% ( 2) 00:16:33.288 2.624 - 2.637: 99.1063% ( 1) 00:16:33.288 2.662 - 2.675: 99.1121% ( 1) 00:16:33.288 2.675 - 2.688: 99.1180% ( 1) 00:16:33.288 2.714 - 2.726: 99.1238% ( 1) 00:16:33.288 2.726 - 2.739: 99.1297% ( 1) 00:16:33.288 2.765 - 2.778: 99.1355% ( 1) 00:16:33.288 2.778 - 2.790: 99.1414% ( 1) 00:16:33.288 2.790 - 2.803: 99.1589% ( 3) 00:16:33.288 2.803 - 2.816: 99.1647% ( 1) 00:16:33.288 2.842 - 2.854: 99.1706% ( 1) 00:16:33.288 2.867 - 2.880: 99.1764% ( 1) 00:16:33.288 2.906 - 2.918: 99.1822% ( 1) 00:16:33.288 2.918 - 2.931: 99.1881% ( 1) 00:16:33.288 3.008 - 3.021: 99.1939% ( 1) 00:16:33.288 3.021 - 3.034: 99.1998% ( 1) 00:16:33.288 3.110 - 3.123: 99.2114% ( 2) 00:16:33.288 3.123 - 3.136: 99.2173% ( 1) 00:16:33.288 3.174 - 3.187: 99.2231% ( 1) 00:16:33.288 3.264 - 3.277: 99.2290% ( 1) 00:16:33.288 3.277 - 3.302: 99.2348% ( 1) 00:16:33.288 3.302 - 3.328: 99.2407% ( 1) 00:16:33.288 3.405 - 3.430: 99.2465% ( 1) 00:16:33.288 3.584 - 3.610: 99.2523% ( 1) 00:16:33.288 3.610 - 3.635: 99.2640% ( 2) 00:16:33.288 3.712 - 3.738: 99.2699% ( 1) 00:16:33.288 3.840 - 3.866: 99.2815% ( 2) 00:16:33.288 3.891 - 3.917: 99.2932% ( 2) 00:16:33.288 3.917 - 3.942: 99.2991% ( 1) 00:16:33.288 4.019 - 4.045: 99.3049% ( 1) 00:16:33.288 4.045 - 4.070: 99.3107% ( 1) 00:16:33.288 4.096 - 4.122: 99.3166% ( 1) 00:16:33.288 4.173 - 4.198: 99.3224% ( 1) 00:16:33.288 4.557 - 4.582: 99.3283% ( 1) 00:16:33.288 4.710 - 4.736: 99.3400% ( 2) 00:16:33.288 
4.787 - 4.813: 99.3458% ( 1) 00:16:33.288 4.915 - 4.941: 99.3516% ( 1) 00:16:33.288 5.171 - 5.197: 99.3575% ( 1) 00:16:33.288 5.299 - 5.325: 99.3692% ( 2) 00:16:33.288 5.427 - 5.453: 99.3750% ( 1) 00:16:33.288 5.555 - 5.581: 99.3808% ( 1) 00:16:33.288 5.581 - 5.606: 99.3867% ( 1) 00:16:33.288 5.658 - 5.683: 99.3925% ( 1) 00:16:33.288 5.786 - 5.811: 99.3984% ( 1) 00:16:33.288 5.862 - 5.888: 99.4042% ( 1) 00:16:33.288 5.914 - 5.939: 99.4100% ( 1) 00:16:33.288 5.965 - 5.990: 99.4217% ( 2) 00:16:33.288 6.016 - 6.042: 99.4276% ( 1) 00:16:33.288 6.042 - 6.067: 99.4334% ( 1) 00:16:33.288 6.118 - 6.144: 99.4393% ( 1) 00:16:33.288 6.170 - 6.195: 99.4509% ( 2) 00:16:33.288 6.323 - 6.349: 99.4568% ( 1) 00:16:33.288 6.349 - 6.374: 99.4685% ( 2) 00:16:33.288 6.605 - 6.656: 99.4801% ( 2) 00:16:33.288 6.656 - 6.707: 99.4918% ( 2) 00:16:33.288 7.117 - 7.168: 99.4977% ( 1) 00:16:33.288 7.168 - 7.219: 99.5035% ( 1) 00:16:33.288 7.219 - 7.270: 99.5093% ( 1) 00:16:33.288 7.526 - 7.578: 99.5152% ( 1) 00:16:33.288 7.782 - 7.834: 99.5210% ( 1) 00:16:33.288 7.936 - 7.987: 99.5269% ( 1) 00:16:33.288 8.141 - 8.192: 9[2024-04-26 08:50:50.432413] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:33.288 9.5327% ( 1) 00:16:33.288 8.294 - 8.346: 99.5386% ( 1) 00:16:33.288 8.448 - 8.499: 99.5444% ( 1) 00:16:33.288 8.704 - 8.755: 99.5502% ( 1) 00:16:33.288 9.779 - 9.830: 99.5561% ( 1) 00:16:33.288 10.189 - 10.240: 99.5619% ( 1) 00:16:33.288 13.619 - 13.722: 99.5678% ( 1) 00:16:33.288 15.770 - 15.872: 99.5736% ( 1) 00:16:33.288 17.715 - 17.818: 99.5794% ( 1) 00:16:33.288 3984.589 - 4010.803: 99.9942% ( 71) 00:16:33.288 7025.459 - 7077.888: 100.0000% ( 1) 00:16:33.288 00:16:33.288 08:50:50 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:33.288 08:50:50 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:33.288 08:50:50 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:33.288 08:50:50 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:33.288 08:50:50 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:33.548 [ 00:16:33.548 { 00:16:33.548 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:33.548 "subtype": "Discovery", 00:16:33.548 "listen_addresses": [], 00:16:33.548 "allow_any_host": true, 00:16:33.548 "hosts": [] 00:16:33.548 }, 00:16:33.548 { 00:16:33.548 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:33.548 "subtype": "NVMe", 00:16:33.548 "listen_addresses": [ 00:16:33.548 { 00:16:33.548 "transport": "VFIOUSER", 00:16:33.548 "trtype": "VFIOUSER", 00:16:33.548 "adrfam": "IPv4", 00:16:33.548 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:33.548 "trsvcid": "0" 00:16:33.548 } 00:16:33.548 ], 00:16:33.548 "allow_any_host": true, 00:16:33.548 "hosts": [], 00:16:33.548 "serial_number": "SPDK1", 00:16:33.548 "model_number": "SPDK bdev Controller", 00:16:33.548 "max_namespaces": 32, 00:16:33.548 "min_cntlid": 1, 00:16:33.548 "max_cntlid": 65519, 00:16:33.548 "namespaces": [ 00:16:33.548 { 00:16:33.548 "nsid": 1, 00:16:33.548 "bdev_name": "Malloc1", 00:16:33.548 "name": "Malloc1", 00:16:33.548 "nguid": "76404C9C451647DB9270CBEDFDE15D5A", 00:16:33.548 "uuid": "76404c9c-4516-47db-9270-cbedfde15d5a" 00:16:33.548 }, 00:16:33.548 { 00:16:33.548 "nsid": 2, 00:16:33.548 "bdev_name": "Malloc3", 00:16:33.548 "name": "Malloc3", 
00:16:33.548 "nguid": "7DE9F1FF71A749D1AD2C3F15D02A56EF", 00:16:33.548 "uuid": "7de9f1ff-71a7-49d1-ad2c-3f15d02a56ef" 00:16:33.548 } 00:16:33.548 ] 00:16:33.548 }, 00:16:33.548 { 00:16:33.548 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:33.548 "subtype": "NVMe", 00:16:33.548 "listen_addresses": [ 00:16:33.548 { 00:16:33.548 "transport": "VFIOUSER", 00:16:33.548 "trtype": "VFIOUSER", 00:16:33.548 "adrfam": "IPv4", 00:16:33.548 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:33.548 "trsvcid": "0" 00:16:33.548 } 00:16:33.548 ], 00:16:33.548 "allow_any_host": true, 00:16:33.548 "hosts": [], 00:16:33.548 "serial_number": "SPDK2", 00:16:33.548 "model_number": "SPDK bdev Controller", 00:16:33.548 "max_namespaces": 32, 00:16:33.548 "min_cntlid": 1, 00:16:33.548 "max_cntlid": 65519, 00:16:33.548 "namespaces": [ 00:16:33.548 { 00:16:33.548 "nsid": 1, 00:16:33.548 "bdev_name": "Malloc2", 00:16:33.548 "name": "Malloc2", 00:16:33.548 "nguid": "5C77D7D52A6443C99C282CDA3C8F744F", 00:16:33.548 "uuid": "5c77d7d5-2a64-43c9-9c28-2cda3c8f744f" 00:16:33.548 } 00:16:33.548 ] 00:16:33.548 } 00:16:33.548 ] 00:16:33.548 08:50:50 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:33.548 08:50:50 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:33.548 08:50:50 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2028470 00:16:33.548 08:50:50 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:33.548 08:50:50 -- common/autotest_common.sh@1251 -- # local i=0 00:16:33.548 08:50:50 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:33.548 08:50:50 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:33.548 08:50:50 -- common/autotest_common.sh@1262 -- # return 0 00:16:33.548 08:50:50 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:33.548 08:50:50 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:33.548 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.808 [2024-04-26 08:50:50.823843] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:33.808 Malloc4 00:16:33.808 08:50:50 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:33.808 [2024-04-26 08:50:51.010245] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:33.808 08:50:51 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:33.808 Asynchronous Event Request test 00:16:33.808 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:33.808 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:33.808 Registering asynchronous event callbacks... 00:16:33.808 Starting namespace attribute notice tests for all controllers... 00:16:33.808 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:33.808 aer_cb - Changed Namespace 00:16:33.808 Cleaning up... 
00:16:34.067 [ 00:16:34.067 { 00:16:34.067 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:34.067 "subtype": "Discovery", 00:16:34.067 "listen_addresses": [], 00:16:34.067 "allow_any_host": true, 00:16:34.067 "hosts": [] 00:16:34.067 }, 00:16:34.067 { 00:16:34.067 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:34.067 "subtype": "NVMe", 00:16:34.067 "listen_addresses": [ 00:16:34.067 { 00:16:34.067 "transport": "VFIOUSER", 00:16:34.067 "trtype": "VFIOUSER", 00:16:34.067 "adrfam": "IPv4", 00:16:34.067 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:34.067 "trsvcid": "0" 00:16:34.067 } 00:16:34.067 ], 00:16:34.067 "allow_any_host": true, 00:16:34.067 "hosts": [], 00:16:34.067 "serial_number": "SPDK1", 00:16:34.067 "model_number": "SPDK bdev Controller", 00:16:34.067 "max_namespaces": 32, 00:16:34.067 "min_cntlid": 1, 00:16:34.067 "max_cntlid": 65519, 00:16:34.067 "namespaces": [ 00:16:34.067 { 00:16:34.067 "nsid": 1, 00:16:34.067 "bdev_name": "Malloc1", 00:16:34.067 "name": "Malloc1", 00:16:34.067 "nguid": "76404C9C451647DB9270CBEDFDE15D5A", 00:16:34.067 "uuid": "76404c9c-4516-47db-9270-cbedfde15d5a" 00:16:34.067 }, 00:16:34.067 { 00:16:34.067 "nsid": 2, 00:16:34.067 "bdev_name": "Malloc3", 00:16:34.067 "name": "Malloc3", 00:16:34.067 "nguid": "7DE9F1FF71A749D1AD2C3F15D02A56EF", 00:16:34.067 "uuid": "7de9f1ff-71a7-49d1-ad2c-3f15d02a56ef" 00:16:34.067 } 00:16:34.067 ] 00:16:34.067 }, 00:16:34.067 { 00:16:34.067 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:34.067 "subtype": "NVMe", 00:16:34.067 "listen_addresses": [ 00:16:34.067 { 00:16:34.067 "transport": "VFIOUSER", 00:16:34.067 "trtype": "VFIOUSER", 00:16:34.067 "adrfam": "IPv4", 00:16:34.067 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:34.067 "trsvcid": "0" 00:16:34.067 } 00:16:34.067 ], 00:16:34.067 "allow_any_host": true, 00:16:34.067 "hosts": [], 00:16:34.067 "serial_number": "SPDK2", 00:16:34.068 "model_number": "SPDK bdev Controller", 00:16:34.068 "max_namespaces": 32, 00:16:34.068 "min_cntlid": 1, 00:16:34.068 "max_cntlid": 65519, 00:16:34.068 "namespaces": [ 00:16:34.068 { 00:16:34.068 "nsid": 1, 00:16:34.068 "bdev_name": "Malloc2", 00:16:34.068 "name": "Malloc2", 00:16:34.068 "nguid": "5C77D7D52A6443C99C282CDA3C8F744F", 00:16:34.068 "uuid": "5c77d7d5-2a64-43c9-9c28-2cda3c8f744f" 00:16:34.068 }, 00:16:34.068 { 00:16:34.068 "nsid": 2, 00:16:34.068 "bdev_name": "Malloc4", 00:16:34.068 "name": "Malloc4", 00:16:34.068 "nguid": "7BC0FD2F89D14D918B00EBDDCE82CEB3", 00:16:34.068 "uuid": "7bc0fd2f-89d1-4d91-8b00-ebddce82ceb3" 00:16:34.068 } 00:16:34.068 ] 00:16:34.068 } 00:16:34.068 ] 00:16:34.068 08:50:51 -- target/nvmf_vfio_user.sh@44 -- # wait 2028470 00:16:34.068 08:50:51 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:34.068 08:50:51 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2020447 00:16:34.068 08:50:51 -- common/autotest_common.sh@936 -- # '[' -z 2020447 ']' 00:16:34.068 08:50:51 -- common/autotest_common.sh@940 -- # kill -0 2020447 00:16:34.068 08:50:51 -- common/autotest_common.sh@941 -- # uname 00:16:34.068 08:50:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:34.068 08:50:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2020447 00:16:34.068 08:50:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:34.068 08:50:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:34.068 08:50:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2020447' 00:16:34.068 killing process with pid 2020447 00:16:34.068 
08:50:51 -- common/autotest_common.sh@955 -- # kill 2020447 00:16:34.068 [2024-04-26 08:50:51.279473] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:34.068 08:50:51 -- common/autotest_common.sh@960 -- # wait 2020447 00:16:34.327 08:50:51 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:34.328 08:50:51 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:34.328 08:50:51 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:34.328 08:50:51 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:34.328 08:50:51 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:34.328 08:50:51 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2028739 00:16:34.328 08:50:51 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2028739' 00:16:34.328 Process pid: 2028739 00:16:34.328 08:50:51 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:34.328 08:50:51 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:34.328 08:50:51 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2028739 00:16:34.328 08:50:51 -- common/autotest_common.sh@817 -- # '[' -z 2028739 ']' 00:16:34.328 08:50:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.328 08:50:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:34.328 08:50:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.328 08:50:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:34.328 08:50:51 -- common/autotest_common.sh@10 -- # set +x 00:16:34.587 [2024-04-26 08:50:51.612184] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:34.587 [2024-04-26 08:50:51.613056] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:16:34.587 [2024-04-26 08:50:51.613094] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.587 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.587 [2024-04-26 08:50:51.682704] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.587 [2024-04-26 08:50:51.754395] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.587 [2024-04-26 08:50:51.754434] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.587 [2024-04-26 08:50:51.754443] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.587 [2024-04-26 08:50:51.754476] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.587 [2024-04-26 08:50:51.754484] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:34.587 [2024-04-26 08:50:51.754533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.587 [2024-04-26 08:50:51.754624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.587 [2024-04-26 08:50:51.754711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.587 [2024-04-26 08:50:51.754713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.587 [2024-04-26 08:50:51.827832] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:16:34.587 [2024-04-26 08:50:51.827954] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:16:34.587 [2024-04-26 08:50:51.828129] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:16:34.587 [2024-04-26 08:50:51.828582] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:34.587 [2024-04-26 08:50:51.828682] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:16:35.534 08:50:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:35.534 08:50:52 -- common/autotest_common.sh@850 -- # return 0 00:16:35.534 08:50:52 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:36.469 08:50:53 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:36.469 08:50:53 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:36.469 08:50:53 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:36.469 08:50:53 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:36.469 08:50:53 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:36.469 08:50:53 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:36.728 Malloc1 00:16:36.728 08:50:53 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:36.728 08:50:53 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:36.987 08:50:54 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:37.245 08:50:54 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:37.245 08:50:54 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:37.245 08:50:54 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:37.504 Malloc2 00:16:37.504 08:50:54 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:37.504 08:50:54 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:37.763 08:50:54 -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:38.022 08:50:55 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:38.022 08:50:55 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2028739 00:16:38.022 08:50:55 -- common/autotest_common.sh@936 -- # '[' -z 2028739 ']' 00:16:38.022 08:50:55 -- common/autotest_common.sh@940 -- # kill -0 2028739 00:16:38.022 08:50:55 -- common/autotest_common.sh@941 -- # uname 00:16:38.022 08:50:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:38.022 08:50:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2028739 00:16:38.022 08:50:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:38.022 08:50:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:38.022 08:50:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2028739' 00:16:38.022 killing process with pid 2028739 00:16:38.022 08:50:55 -- common/autotest_common.sh@955 -- # kill 2028739 00:16:38.022 08:50:55 -- common/autotest_common.sh@960 -- # wait 2028739 00:16:38.280 08:50:55 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:38.280 08:50:55 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:38.280 00:16:38.280 real 0m51.448s 00:16:38.280 user 3m22.376s 00:16:38.280 sys 0m4.687s 00:16:38.280 08:50:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:38.280 08:50:55 -- common/autotest_common.sh@10 -- # set +x 00:16:38.280 ************************************ 00:16:38.281 END TEST nvmf_vfio_user 00:16:38.281 ************************************ 00:16:38.281 08:50:55 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:38.281 08:50:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:38.281 08:50:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:38.281 08:50:55 -- common/autotest_common.sh@10 -- # set +x 00:16:38.539 ************************************ 00:16:38.539 START TEST nvmf_vfio_user_nvme_compliance 00:16:38.539 ************************************ 00:16:38.539 08:50:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:38.539 * Looking for test storage... 
00:16:38.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:38.540 08:50:55 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.540 08:50:55 -- nvmf/common.sh@7 -- # uname -s 00:16:38.540 08:50:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.540 08:50:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.540 08:50:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.540 08:50:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.540 08:50:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.540 08:50:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.540 08:50:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.540 08:50:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.540 08:50:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.540 08:50:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.540 08:50:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:38.540 08:50:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:38.540 08:50:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.540 08:50:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.540 08:50:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.540 08:50:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.540 08:50:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:38.540 08:50:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.540 08:50:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.540 08:50:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.540 08:50:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.540 08:50:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.540 08:50:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.540 08:50:55 -- paths/export.sh@5 -- # export PATH 00:16:38.540 08:50:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.540 08:50:55 -- nvmf/common.sh@47 -- # : 0 00:16:38.540 08:50:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:38.540 08:50:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:38.540 08:50:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.540 08:50:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.540 08:50:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.540 08:50:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:38.540 08:50:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:38.540 08:50:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:38.540 08:50:55 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:38.540 08:50:55 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:38.540 08:50:55 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:38.540 08:50:55 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:38.540 08:50:55 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:38.540 08:50:55 -- compliance/compliance.sh@20 -- # nvmfpid=2029393 00:16:38.540 08:50:55 -- compliance/compliance.sh@21 -- # echo 'Process pid: 2029393' 00:16:38.540 Process pid: 2029393 00:16:38.540 08:50:55 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:38.540 08:50:55 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:38.540 08:50:55 -- compliance/compliance.sh@24 -- # waitforlisten 2029393 00:16:38.540 08:50:55 -- common/autotest_common.sh@817 -- # '[' -z 2029393 ']' 00:16:38.540 08:50:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.540 08:50:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:38.540 08:50:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.540 08:50:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:38.540 08:50:55 -- common/autotest_common.sh@10 -- # set +x 00:16:38.540 [2024-04-26 08:50:55.720284] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:16:38.540 [2024-04-26 08:50:55.720335] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.540 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.800 [2024-04-26 08:50:55.790101] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:38.800 [2024-04-26 08:50:55.862463] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.800 [2024-04-26 08:50:55.862500] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.800 [2024-04-26 08:50:55.862511] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:38.800 [2024-04-26 08:50:55.862523] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:38.800 [2024-04-26 08:50:55.862530] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:38.800 [2024-04-26 08:50:55.862575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.800 [2024-04-26 08:50:55.862669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.800 [2024-04-26 08:50:55.862671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.368 08:50:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:39.368 08:50:56 -- common/autotest_common.sh@850 -- # return 0 00:16:39.368 08:50:56 -- compliance/compliance.sh@26 -- # sleep 1 00:16:40.305 08:50:57 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:40.305 08:50:57 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:40.305 08:50:57 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:40.305 08:50:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.305 08:50:57 -- common/autotest_common.sh@10 -- # set +x 00:16:40.305 08:50:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.305 08:50:57 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:40.305 08:50:57 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:40.305 08:50:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.305 08:50:57 -- common/autotest_common.sh@10 -- # set +x 00:16:40.564 malloc0 00:16:40.564 08:50:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.564 08:50:57 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:40.564 08:50:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.564 08:50:57 -- common/autotest_common.sh@10 -- # set +x 00:16:40.564 08:50:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.564 08:50:57 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:40.564 08:50:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.564 08:50:57 -- common/autotest_common.sh@10 -- # set +x 00:16:40.564 08:50:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.564 08:50:57 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:40.564 08:50:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.564 08:50:57 -- common/autotest_common.sh@10 -- # set +x 00:16:40.564 08:50:57 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.564 08:50:57 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:40.564 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.564 00:16:40.564 00:16:40.564 CUnit - A unit testing framework for C - Version 2.1-3 00:16:40.564 http://cunit.sourceforge.net/ 00:16:40.564 00:16:40.564 00:16:40.564 Suite: nvme_compliance 00:16:40.564 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-26 08:50:57.786245] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:40.564 [2024-04-26 08:50:57.787569] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:40.564 [2024-04-26 08:50:57.787583] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:40.564 [2024-04-26 08:50:57.787592] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:40.564 [2024-04-26 08:50:57.791276] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:40.823 passed 00:16:40.823 Test: admin_identify_ctrlr_verify_fused ...[2024-04-26 08:50:57.869846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:40.823 [2024-04-26 08:50:57.872861] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:40.823 passed 00:16:40.823 Test: admin_identify_ns ...[2024-04-26 08:50:57.951616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:40.823 [2024-04-26 08:50:58.012461] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:40.823 [2024-04-26 08:50:58.020463] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:40.823 [2024-04-26 08:50:58.041566] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:40.823 passed 00:16:41.082 Test: admin_get_features_mandatory_features ...[2024-04-26 08:50:58.118955] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.082 [2024-04-26 08:50:58.123984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.082 passed 00:16:41.083 Test: admin_get_features_optional_features ...[2024-04-26 08:50:58.202495] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.083 [2024-04-26 08:50:58.205516] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.083 passed 00:16:41.083 Test: admin_set_features_number_of_queues ...[2024-04-26 08:50:58.281013] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.341 [2024-04-26 08:50:58.386552] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.341 passed 00:16:41.341 Test: admin_get_log_page_mandatory_logs ...[2024-04-26 08:50:58.463987] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.341 [2024-04-26 08:50:58.467012] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.341 passed 00:16:41.341 Test: admin_get_log_page_with_lpo ...[2024-04-26 08:50:58.546033] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.600 [2024-04-26 08:50:58.617459] 
ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:41.600 [2024-04-26 08:50:58.630536] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.600 passed 00:16:41.600 Test: fabric_property_get ...[2024-04-26 08:50:58.708067] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.600 [2024-04-26 08:50:58.709289] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:41.600 [2024-04-26 08:50:58.711087] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.600 passed 00:16:41.600 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-26 08:50:58.790570] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.600 [2024-04-26 08:50:58.791792] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:41.600 [2024-04-26 08:50:58.793589] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.600 passed 00:16:41.859 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-26 08:50:58.867622] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.859 [2024-04-26 08:50:58.952458] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:41.859 [2024-04-26 08:50:58.968459] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:41.859 [2024-04-26 08:50:58.973547] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.859 passed 00:16:41.859 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-26 08:50:59.051063] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:41.859 [2024-04-26 08:50:59.052285] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:41.859 [2024-04-26 08:50:59.054082] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:41.859 passed 00:16:42.118 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-26 08:50:59.130638] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.118 [2024-04-26 08:50:59.207458] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:42.118 [2024-04-26 08:50:59.231462] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:42.118 [2024-04-26 08:50:59.236550] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.118 passed 00:16:42.118 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-26 08:50:59.310002] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.118 [2024-04-26 08:50:59.311234] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:42.118 [2024-04-26 08:50:59.311259] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:42.118 [2024-04-26 08:50:59.313026] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.118 passed 00:16:42.416 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-26 08:50:59.388573] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.416 [2024-04-26 08:50:59.482474] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:16:42.416 [2024-04-26 08:50:59.490466] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:42.416 [2024-04-26 08:50:59.498467] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:42.416 [2024-04-26 08:50:59.506458] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:42.416 [2024-04-26 08:50:59.535543] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.416 passed 00:16:42.416 Test: admin_create_io_sq_verify_pc ...[2024-04-26 08:50:59.608971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:42.416 [2024-04-26 08:50:59.624465] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:42.416 [2024-04-26 08:50:59.642150] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:42.675 passed 00:16:42.675 Test: admin_create_io_qp_max_qps ...[2024-04-26 08:50:59.718656] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:43.611 [2024-04-26 08:51:00.817463] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:44.180 [2024-04-26 08:51:01.201427] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:44.180 passed 00:16:44.180 Test: admin_create_io_sq_shared_cq ...[2024-04-26 08:51:01.280391] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:44.180 [2024-04-26 08:51:01.407459] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:44.439 [2024-04-26 08:51:01.444521] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:44.439 passed 00:16:44.439 00:16:44.439 Run Summary: Type Total Ran Passed Failed Inactive 00:16:44.439 suites 1 1 n/a 0 0 00:16:44.439 tests 18 18 18 0 0 00:16:44.439 asserts 360 360 360 0 n/a 00:16:44.439 00:16:44.439 Elapsed time = 1.504 seconds 00:16:44.439 08:51:01 -- compliance/compliance.sh@42 -- # killprocess 2029393 00:16:44.439 08:51:01 -- common/autotest_common.sh@936 -- # '[' -z 2029393 ']' 00:16:44.439 08:51:01 -- common/autotest_common.sh@940 -- # kill -0 2029393 00:16:44.439 08:51:01 -- common/autotest_common.sh@941 -- # uname 00:16:44.439 08:51:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:44.439 08:51:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2029393 00:16:44.439 08:51:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:44.439 08:51:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:44.439 08:51:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2029393' 00:16:44.439 killing process with pid 2029393 00:16:44.439 08:51:01 -- common/autotest_common.sh@955 -- # kill 2029393 00:16:44.439 08:51:01 -- common/autotest_common.sh@960 -- # wait 2029393 00:16:44.699 08:51:01 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:44.699 08:51:01 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:44.699 00:16:44.699 real 0m6.239s 00:16:44.699 user 0m17.546s 00:16:44.699 sys 0m0.712s 00:16:44.699 08:51:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:44.699 08:51:01 -- common/autotest_common.sh@10 -- # set +x 00:16:44.699 ************************************ 00:16:44.699 END TEST 
nvmf_vfio_user_nvme_compliance 00:16:44.699 ************************************ 00:16:44.699 08:51:01 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:44.699 08:51:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:44.699 08:51:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:44.699 08:51:01 -- common/autotest_common.sh@10 -- # set +x 00:16:44.959 ************************************ 00:16:44.959 START TEST nvmf_vfio_user_fuzz 00:16:44.959 ************************************ 00:16:44.959 08:51:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:44.959 * Looking for test storage... 00:16:44.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:44.959 08:51:02 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.959 08:51:02 -- nvmf/common.sh@7 -- # uname -s 00:16:44.959 08:51:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.959 08:51:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.959 08:51:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.959 08:51:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.959 08:51:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.959 08:51:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.959 08:51:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.959 08:51:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.959 08:51:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.959 08:51:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.959 08:51:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:44.959 08:51:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:44.959 08:51:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.959 08:51:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.959 08:51:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.959 08:51:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.959 08:51:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:44.959 08:51:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.959 08:51:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.959 08:51:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.960 08:51:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.960 08:51:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.960 08:51:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.960 08:51:02 -- paths/export.sh@5 -- # export PATH 00:16:44.960 08:51:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.960 08:51:02 -- nvmf/common.sh@47 -- # : 0 00:16:44.960 08:51:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.960 08:51:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.960 08:51:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.960 08:51:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.960 08:51:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.960 08:51:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.960 08:51:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.960 08:51:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.960 08:51:02 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:44.960 08:51:02 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:44.960 08:51:02 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:44.960 08:51:02 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:44.960 08:51:02 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:44.960 08:51:02 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:44.960 08:51:02 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:44.960 08:51:02 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2030731 00:16:44.960 08:51:02 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:44.960 08:51:02 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2030731' 00:16:44.960 Process pid: 2030731 00:16:44.960 08:51:02 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:44.960 08:51:02 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2030731 00:16:44.960 08:51:02 -- common/autotest_common.sh@817 -- 
# '[' -z 2030731 ']' 00:16:44.960 08:51:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.960 08:51:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:44.960 08:51:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.960 08:51:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:44.960 08:51:02 -- common/autotest_common.sh@10 -- # set +x 00:16:45.897 08:51:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:45.897 08:51:02 -- common/autotest_common.sh@850 -- # return 0 00:16:45.897 08:51:02 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:46.834 08:51:03 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:46.834 08:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.834 08:51:03 -- common/autotest_common.sh@10 -- # set +x 00:16:46.834 08:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.834 08:51:03 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:46.834 08:51:03 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:46.834 08:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.834 08:51:03 -- common/autotest_common.sh@10 -- # set +x 00:16:46.834 malloc0 00:16:46.834 08:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.834 08:51:04 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:46.834 08:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.834 08:51:04 -- common/autotest_common.sh@10 -- # set +x 00:16:46.834 08:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.834 08:51:04 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:46.834 08:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.834 08:51:04 -- common/autotest_common.sh@10 -- # set +x 00:16:46.834 08:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.834 08:51:04 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:46.834 08:51:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:46.834 08:51:04 -- common/autotest_common.sh@10 -- # set +x 00:16:46.834 08:51:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:46.834 08:51:04 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:46.834 08:51:04 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:18.917 Fuzzing completed. 
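For orientation: stripped of the rpc_cmd/xtrace plumbing, the vfio-user fuzz target bring-up recorded above reduces to five RPCs plus the fuzzer invocation. A minimal standalone sketch of the same sequence, assuming $SPDK_DIR points at the SPDK checkout and an nvmf_tgt is already listening on the default RPC socket (rpc_cmd in the harness is a thin wrapper around scripts/rpc.py):

    #!/usr/bin/env bash
    set -e
    rpc="$SPDK_DIR/scripts/rpc.py"
    nqn=nqn.2021-09.io.spdk:cnode0
    traddr=/var/run/vfio-user

    mkdir -p "$traddr"
    "$rpc" nvmf_create_transport -t VFIOUSER          # register the vfio-user transport
    "$rpc" bdev_malloc_create 64 512 -b malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
    "$rpc" nvmf_create_subsystem "$nqn" -a -s spdk    # -a: allow any host, -s: serial number
    "$rpc" nvmf_subsystem_add_ns "$nqn" malloc0       # expose malloc0 as a namespace
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t VFIOUSER -a "$traddr" -s 0

    # 30 s fuzz run against that listener, flags exactly as in the run above:
    "$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
        -F "trtype:VFIOUSER subnqn:$nqn traddr:$traddr" -N -a

The fixed seed (-S 123456) keeps the generated command stream, and hence the opcode and command counts reported below, reproducible from run to run.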
Shutting down the fuzz application 00:17:18.917 00:17:18.917 Dumping successful admin opcodes: 00:17:18.917 8, 9, 10, 24, 00:17:18.917 Dumping successful io opcodes: 00:17:18.917 0, 00:17:18.917 NS: 0x200003a1ef00 I/O qp, Total commands completed: 892994, total successful commands: 3481, random_seed: 261111488 00:17:18.917 NS: 0x200003a1ef00 admin qp, Total commands completed: 216755, total successful commands: 1743, random_seed: 3631952768 00:17:18.917 08:51:34 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:18.917 08:51:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.917 08:51:34 -- common/autotest_common.sh@10 -- # set +x 00:17:18.917 08:51:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.917 08:51:34 -- target/vfio_user_fuzz.sh@46 -- # killprocess 2030731 00:17:18.917 08:51:34 -- common/autotest_common.sh@936 -- # '[' -z 2030731 ']' 00:17:18.917 08:51:34 -- common/autotest_common.sh@940 -- # kill -0 2030731 00:17:18.917 08:51:34 -- common/autotest_common.sh@941 -- # uname 00:17:18.917 08:51:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.917 08:51:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2030731 00:17:18.917 08:51:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:18.917 08:51:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:18.917 08:51:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2030731' 00:17:18.917 killing process with pid 2030731 00:17:18.917 08:51:34 -- common/autotest_common.sh@955 -- # kill 2030731 00:17:18.917 08:51:34 -- common/autotest_common.sh@960 -- # wait 2030731 00:17:18.917 08:51:34 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:18.917 08:51:34 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:18.917 00:17:18.917 real 0m32.865s 00:17:18.917 user 0m30.193s 00:17:18.917 sys 0m31.319s 00:17:18.917 08:51:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:18.917 08:51:34 -- common/autotest_common.sh@10 -- # set +x 00:17:18.917 ************************************ 00:17:18.917 END TEST nvmf_vfio_user_fuzz 00:17:18.917 ************************************ 00:17:18.917 08:51:34 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:18.917 08:51:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:18.917 08:51:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:18.917 08:51:34 -- common/autotest_common.sh@10 -- # set +x 00:17:18.917 ************************************ 00:17:18.917 START TEST nvmf_host_management 00:17:18.917 ************************************ 00:17:18.917 08:51:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:18.917 * Looking for test storage... 
00:17:18.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:18.917 08:51:35 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.917 08:51:35 -- nvmf/common.sh@7 -- # uname -s 00:17:18.917 08:51:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.917 08:51:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.917 08:51:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.917 08:51:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.917 08:51:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.917 08:51:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.917 08:51:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.917 08:51:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.917 08:51:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.917 08:51:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.917 08:51:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:18.917 08:51:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:18.917 08:51:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.917 08:51:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.917 08:51:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.917 08:51:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.917 08:51:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:18.917 08:51:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.917 08:51:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.917 08:51:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.917 08:51:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.917 08:51:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.917 08:51:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.917 08:51:35 -- paths/export.sh@5 -- # export PATH 00:17:18.918 08:51:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.918 08:51:35 -- nvmf/common.sh@47 -- # : 0 00:17:18.918 08:51:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.918 08:51:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.918 08:51:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.918 08:51:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.918 08:51:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.918 08:51:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.918 08:51:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:18.918 08:51:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:18.918 08:51:35 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:18.918 08:51:35 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:18.918 08:51:35 -- target/host_management.sh@105 -- # nvmftestinit 00:17:18.918 08:51:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:18.918 08:51:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.918 08:51:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:18.918 08:51:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:18.918 08:51:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:18.918 08:51:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.918 08:51:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.918 08:51:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.918 08:51:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:18.918 08:51:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:18.918 08:51:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:18.918 08:51:35 -- common/autotest_common.sh@10 -- # set +x 00:17:25.537 08:51:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:25.537 08:51:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:25.537 08:51:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:25.537 08:51:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:25.537 08:51:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:25.537 08:51:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:25.537 08:51:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:25.537 08:51:41 -- nvmf/common.sh@295 -- # net_devs=() 00:17:25.537 08:51:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:25.537 
08:51:41 -- nvmf/common.sh@296 -- # e810=() 00:17:25.537 08:51:41 -- nvmf/common.sh@296 -- # local -ga e810 00:17:25.537 08:51:41 -- nvmf/common.sh@297 -- # x722=() 00:17:25.537 08:51:41 -- nvmf/common.sh@297 -- # local -ga x722 00:17:25.537 08:51:41 -- nvmf/common.sh@298 -- # mlx=() 00:17:25.537 08:51:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:25.537 08:51:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:25.537 08:51:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:25.537 08:51:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:25.537 08:51:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:25.537 08:51:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:25.537 08:51:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:25.537 08:51:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:25.537 08:51:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:25.537 08:51:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:25.537 08:51:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:25.537 08:51:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:25.537 08:51:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:25.537 08:51:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:25.537 08:51:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:25.537 08:51:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:25.537 08:51:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:25.537 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:25.537 08:51:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:25.537 08:51:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:25.537 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:25.537 08:51:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:25.537 08:51:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:25.537 08:51:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.537 08:51:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:25.537 08:51:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.537 08:51:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:17:25.537 Found net devices under 0000:af:00.0: cvl_0_0 00:17:25.537 08:51:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.537 08:51:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:25.537 08:51:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.537 08:51:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:25.537 08:51:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.537 08:51:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:25.537 Found net devices under 0000:af:00.1: cvl_0_1 00:17:25.537 08:51:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.537 08:51:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:25.537 08:51:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:25.537 08:51:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:25.537 08:51:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:25.537 08:51:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.537 08:51:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.537 08:51:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:25.537 08:51:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:25.537 08:51:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:25.537 08:51:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:25.537 08:51:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:25.537 08:51:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:25.537 08:51:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.537 08:51:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:25.538 08:51:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:25.538 08:51:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:25.538 08:51:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:25.538 08:51:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:25.538 08:51:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:25.538 08:51:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:25.538 08:51:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:25.538 08:51:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:25.538 08:51:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:25.538 08:51:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:25.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:25.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:17:25.538 00:17:25.538 --- 10.0.0.2 ping statistics --- 00:17:25.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.538 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:17:25.538 08:51:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:25.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:25.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:17:25.538 00:17:25.538 --- 10.0.0.1 ping statistics --- 00:17:25.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.538 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:17:25.538 08:51:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.538 08:51:41 -- nvmf/common.sh@411 -- # return 0 00:17:25.538 08:51:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:25.538 08:51:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.538 08:51:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:25.538 08:51:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:25.538 08:51:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.538 08:51:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:25.538 08:51:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:25.538 08:51:41 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:17:25.538 08:51:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:25.538 08:51:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:25.538 08:51:41 -- common/autotest_common.sh@10 -- # set +x 00:17:25.538 ************************************ 00:17:25.538 START TEST nvmf_host_management 00:17:25.538 ************************************ 00:17:25.538 08:51:42 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:17:25.538 08:51:42 -- target/host_management.sh@69 -- # starttarget 00:17:25.538 08:51:42 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:25.538 08:51:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:25.538 08:51:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:25.538 08:51:42 -- common/autotest_common.sh@10 -- # set +x 00:17:25.538 08:51:42 -- nvmf/common.sh@470 -- # nvmfpid=2040025 00:17:25.538 08:51:42 -- nvmf/common.sh@471 -- # waitforlisten 2040025 00:17:25.538 08:51:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:25.538 08:51:42 -- common/autotest_common.sh@817 -- # '[' -z 2040025 ']' 00:17:25.538 08:51:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.538 08:51:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:25.538 08:51:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.538 08:51:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:25.538 08:51:42 -- common/autotest_common.sh@10 -- # set +x 00:17:25.538 [2024-04-26 08:51:42.120905] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:17:25.538 [2024-04-26 08:51:42.120951] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.538 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.538 [2024-04-26 08:51:42.195956] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:25.538 [2024-04-26 08:51:42.269444] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:25.538 [2024-04-26 08:51:42.269486] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.538 [2024-04-26 08:51:42.269495] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.538 [2024-04-26 08:51:42.269504] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.538 [2024-04-26 08:51:42.269511] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.538 [2024-04-26 08:51:42.269608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.538 [2024-04-26 08:51:42.269693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:25.538 [2024-04-26 08:51:42.269802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.538 [2024-04-26 08:51:42.269804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:25.797 08:51:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:25.797 08:51:42 -- common/autotest_common.sh@850 -- # return 0 00:17:25.797 08:51:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:25.797 08:51:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:25.797 08:51:42 -- common/autotest_common.sh@10 -- # set +x 00:17:25.797 08:51:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.797 08:51:42 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:25.797 08:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.797 08:51:42 -- common/autotest_common.sh@10 -- # set +x 00:17:25.797 [2024-04-26 08:51:42.968118] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.797 08:51:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.797 08:51:42 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:25.797 08:51:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:25.797 08:51:42 -- common/autotest_common.sh@10 -- # set +x 00:17:25.797 08:51:42 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:25.797 08:51:42 -- target/host_management.sh@23 -- # cat 00:17:25.797 08:51:42 -- target/host_management.sh@30 -- # rpc_cmd 00:17:25.797 08:51:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.797 08:51:42 -- common/autotest_common.sh@10 -- # set +x 00:17:25.797 Malloc0 00:17:25.797 [2024-04-26 08:51:43.035022] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.057 08:51:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.057 08:51:43 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:26.057 08:51:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:26.057 08:51:43 -- common/autotest_common.sh@10 -- # set +x 00:17:26.057 08:51:43 -- target/host_management.sh@73 -- # perfpid=2040330 00:17:26.057 08:51:43 -- target/host_management.sh@74 -- # waitforlisten 2040330 /var/tmp/bdevperf.sock 00:17:26.057 08:51:43 -- common/autotest_common.sh@817 -- # '[' -z 2040330 ']' 00:17:26.057 08:51:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.057 08:51:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:26.057 08:51:43 -- target/host_management.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:26.057 08:51:43 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:26.057 08:51:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:26.057 08:51:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:26.057 08:51:43 -- nvmf/common.sh@521 -- # config=() 00:17:26.057 08:51:43 -- common/autotest_common.sh@10 -- # set +x 00:17:26.057 08:51:43 -- nvmf/common.sh@521 -- # local subsystem config 00:17:26.057 08:51:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:26.057 08:51:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:26.057 { 00:17:26.057 "params": { 00:17:26.057 "name": "Nvme$subsystem", 00:17:26.057 "trtype": "$TEST_TRANSPORT", 00:17:26.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.057 "adrfam": "ipv4", 00:17:26.057 "trsvcid": "$NVMF_PORT", 00:17:26.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.057 "hdgst": ${hdgst:-false}, 00:17:26.057 "ddgst": ${ddgst:-false} 00:17:26.057 }, 00:17:26.057 "method": "bdev_nvme_attach_controller" 00:17:26.057 } 00:17:26.057 EOF 00:17:26.057 )") 00:17:26.057 08:51:43 -- nvmf/common.sh@543 -- # cat 00:17:26.057 08:51:43 -- nvmf/common.sh@545 -- # jq . 00:17:26.057 08:51:43 -- nvmf/common.sh@546 -- # IFS=, 00:17:26.057 08:51:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:26.057 "params": { 00:17:26.057 "name": "Nvme0", 00:17:26.057 "trtype": "tcp", 00:17:26.057 "traddr": "10.0.0.2", 00:17:26.057 "adrfam": "ipv4", 00:17:26.057 "trsvcid": "4420", 00:17:26.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:26.057 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:26.057 "hdgst": false, 00:17:26.057 "ddgst": false 00:17:26.057 }, 00:17:26.057 "method": "bdev_nvme_attach_controller" 00:17:26.057 }' 00:17:26.057 [2024-04-26 08:51:43.134942] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:17:26.057 [2024-04-26 08:51:43.134991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2040330 ] 00:17:26.057 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.057 [2024-04-26 08:51:43.205567] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.057 [2024-04-26 08:51:43.272042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.316 Running I/O for 10 seconds... 
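Before injecting any fault, the harness gates on actual traffic: its waitforio helper polls bdevperf's private RPC socket until the NVMe bdev reports at least 100 completed reads. Condensed from the xtrace that follows into a standalone sketch (the socket path, bdev name, jq path, 10-try budget, and 100-op threshold are all taken from the log; the one-second pause between retries is an assumption, since the harness's own delay is not visible in this excerpt):

    sock=/var/tmp/bdevperf.sock
    for _ in $(seq 10); do                                    # harness counts i from 10 down to 0
        ops=$("$SPDK_DIR/scripts/rpc.py" -s "$sock" bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && break                           # I/O confirmed flowing
        sleep 1                                               # assumed back-off between polls
    done

On this run a single poll sufficed (read_io_count=578 below). The nvmf_subsystem_remove_host call that follows is the actual fault injection: the target drops the host's queue pairs, which is what produces the wall of ABORTED - SQ DELETION completions further down.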
00:17:26.886 08:51:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:26.886 08:51:43 -- common/autotest_common.sh@850 -- # return 0 00:17:26.886 08:51:43 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:26.886 08:51:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.886 08:51:43 -- common/autotest_common.sh@10 -- # set +x 00:17:26.886 08:51:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.886 08:51:43 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:26.886 08:51:43 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:26.886 08:51:43 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:26.886 08:51:43 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:26.886 08:51:43 -- target/host_management.sh@52 -- # local ret=1 00:17:26.886 08:51:43 -- target/host_management.sh@53 -- # local i 00:17:26.886 08:51:43 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:26.886 08:51:43 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:26.886 08:51:43 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:26.886 08:51:43 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:26.886 08:51:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.886 08:51:43 -- common/autotest_common.sh@10 -- # set +x 00:17:26.886 08:51:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.886 08:51:44 -- target/host_management.sh@55 -- # read_io_count=578 00:17:26.886 08:51:44 -- target/host_management.sh@58 -- # '[' 578 -ge 100 ']' 00:17:26.886 08:51:44 -- target/host_management.sh@59 -- # ret=0 00:17:26.886 08:51:44 -- target/host_management.sh@60 -- # break 00:17:26.886 08:51:44 -- target/host_management.sh@64 -- # return 0 00:17:26.886 08:51:44 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:26.886 08:51:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.886 08:51:44 -- common/autotest_common.sh@10 -- # set +x 00:17:26.886 [2024-04-26 08:51:44.026192] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97e690 is same with the state(5) to be set 00:17:26.886 [2024-04-26 08:51:44.026237] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97e690 is same with the state(5) to be set 00:17:26.886 [2024-04-26 08:51:44.026247] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97e690 is same with the state(5) to be set 00:17:26.886 [2024-04-26 08:51:44.026256] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97e690 is same with the state(5) to be set 00:17:26.886 [2024-04-26 08:51:44.026265] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97e690 is same with the state(5) to be set 00:17:26.886 [2024-04-26 08:51:44.026274] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97e690 is same with the state(5) to be set 00:17:26.886 [2024-04-26 08:51:44.026283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97e690 is same with the state(5) to be set 00:17:26.886 [2024-04-26 08:51:44.026291] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97e690 is same with the state(5) to 
be set 00:17:26.886 [2024-04-26 08:51:44.026299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97e690 is same with the state(5) to be set 00:17:26.886 [2024-04-26 08:51:44.026307] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97e690 is same with the state(5) to be set 00:17:26.886 [2024-04-26 08:51:44.026316] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97e690 is same with the state(5) to be set 00:17:26.886 [2024-04-26 08:51:44.026324] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97e690 is same with the state(5) to be set 00:17:26.886 08:51:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.886 08:51:44 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:26.886 08:51:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.886 08:51:44 -- common/autotest_common.sh@10 -- # set +x 00:17:26.886 [2024-04-26 08:51:44.033755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.886 [2024-04-26 08:51:44.033790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.887 [2024-04-26 08:51:44.033802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.887 [2024-04-26 08:51:44.033812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.887 [2024-04-26 08:51:44.033822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.887 [2024-04-26 08:51:44.033832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.887 [2024-04-26 08:51:44.033843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.887 [2024-04-26 08:51:44.033852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.887 [2024-04-26 08:51:44.033862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164ab80 is same with the state(5) to be set 00:17:26.887 [2024-04-26 08:51:44.034551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.887 [2024-04-26 08:51:44.034572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.887 [2024-04-26 08:51:44.034588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.887 [2024-04-26 08:51:44.034604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.887 [2024-04-26 08:51:44.034617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:26.887 [2024-04-26 08:51:44.034627] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 60 further WRITE submissions (sqid:1, cid:3 through cid:62, lba:82304 through lba:89856, len:128 each) and their matching ABORTED - SQ DELETION (00/08) completions elided; the dump repeats the same command/completion pair for every I/O outstanding on the deleted submission queue ...]
00:17:26.888 [2024-04-26 08:51:44.035954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:26.888 [2024-04-26 08:51:44.035964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:26.888 [2024-04-26 08:51:44.036032] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a5b8a0 was disconnected and freed. reset controller.
00:17:26.888 [2024-04-26 08:51:44.036883] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:17:26.888 task offset: 81920 on job bdev=Nvme0n1 fails
00:17:26.888
00:17:26.888 Latency(us)
00:17:26.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:26.888 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:26.888 Job: Nvme0n1 ended in about 0.60 seconds with error
00:17:26.888 Verification LBA range: start 0x0 length 0x400
00:17:26.888 Nvme0n1 : 0.60 1058.99 66.19 105.90 0.00 54034.26 1782.58 60397.98
00:17:26.888 ===================================================================================================================
00:17:26.888 Total : 1058.99 66.19 105.90 0.00 54034.26 1782.58 60397.98
00:17:26.888 [2024-04-26 08:51:44.038400] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:26.888 [2024-04-26 08:51:44.038418] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164ab80 (9): Bad file descriptor
00:17:26.888 08:51:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:26.888 08:51:44 -- target/host_management.sh@87 -- # sleep 1
[2024-04-26 08:51:44.092599] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:27.824 08:51:45 -- target/host_management.sh@91 -- # kill -9 2040330
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2040330) - No such process
00:17:27.824 08:51:45 -- target/host_management.sh@91 -- # true
00:17:27.824 08:51:45 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:17:27.824 08:51:45 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:17:27.824 08:51:45 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:17:27.824 08:51:45 -- nvmf/common.sh@521 -- # config=()
00:17:27.824 08:51:45 -- nvmf/common.sh@521 -- # local subsystem config
00:17:27.824 08:51:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:17:27.824 08:51:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:17:27.824 {
00:17:27.824 "params": {
00:17:27.824 "name": "Nvme$subsystem",
00:17:27.824 "trtype": "$TEST_TRANSPORT",
00:17:27.824 "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:27.824 "adrfam": "ipv4",
00:17:27.824 "trsvcid": "$NVMF_PORT",
00:17:27.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:27.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:27.824 "hdgst": ${hdgst:-false},
00:17:27.824 "ddgst": ${ddgst:-false}
00:17:27.824 },
00:17:27.824 "method": "bdev_nvme_attach_controller"
00:17:27.824 }
00:17:27.824 EOF
00:17:27.824 )")
00:17:27.824 08:51:45 -- nvmf/common.sh@543 -- # cat
00:17:27.824 08:51:45 -- nvmf/common.sh@545 -- # jq .
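[Editor's note: the gen_nvmf_target_json helper traced here builds one bdev_nvme_attach_controller entry per subsystem; the filled-in result is printed just below. A minimal standalone sketch of the same bdevperf invocation follows; the outer "subsystems"/"bdev" wrapper and the temp-file path are assumptions for illustration, since this excerpt only shows the inner config entry.]

# Hypothetical standalone reproduction of the traced bdevperf run (sketch).
cat > /tmp/nvme0_attach.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Same flags as the traced run: 64 queued 64 KiB verify I/Os for 1 second.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/nvme0_attach.json -q 64 -o 65536 -w verify -t 1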
00:17:27.824 08:51:45 -- nvmf/common.sh@546 -- # IFS=,
00:17:27.824 08:51:45 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:17:27.824 "params": {
00:17:27.824 "name": "Nvme0",
00:17:27.824 "trtype": "tcp",
00:17:27.824 "traddr": "10.0.0.2",
00:17:27.824 "adrfam": "ipv4",
00:17:27.824 "trsvcid": "4420",
00:17:27.824 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:17:27.824 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:17:27.824 "hdgst": false,
00:17:27.824 "ddgst": false
00:17:27.824 },
00:17:27.824 "method": "bdev_nvme_attach_controller"
00:17:27.824 }'
00:17:28.082 [2024-04-26 08:51:45.095077] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:17:28.082 [2024-04-26 08:51:45.095130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2040611 ]
00:17:28.082 EAL: No free 2048 kB hugepages reported on node 1
00:17:28.082 [2024-04-26 08:51:45.167135] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:28.082 [2024-04-26 08:51:45.235262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:28.341 Running I/O for 1 seconds...
00:17:29.277
00:17:29.277 Latency(us)
00:17:29.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:29.277 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:29.277 Verification LBA range: start 0x0 length 0x400
00:17:29.277 Nvme0n1 : 1.09 1117.73 69.86 0.00 0.00 54465.78 13264.49 61236.84
00:17:29.277 ===================================================================================================================
00:17:29.277 Total : 1117.73 69.86 0.00 0.00 54465.78 13264.49 61236.84
00:17:29.536 08:51:46 -- target/host_management.sh@102 -- # stoptarget
00:17:29.536 08:51:46 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:17:29.536 08:51:46 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:17:29.536 08:51:46 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:17:29.536 08:51:46 -- target/host_management.sh@40 -- # nvmftestfini
00:17:29.536 08:51:46 -- nvmf/common.sh@477 -- # nvmfcleanup
00:17:29.536 08:51:46 -- nvmf/common.sh@117 -- # sync
00:17:29.536 08:51:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:29.536 08:51:46 -- nvmf/common.sh@120 -- # set +e
00:17:29.536 08:51:46 -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:29.536 08:51:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:29.536 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:17:29.536 08:51:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:29.536 08:51:46 -- nvmf/common.sh@124 -- # set -e
00:17:29.536 08:51:46 -- nvmf/common.sh@125 -- # return 0
00:17:29.536 08:51:46 -- nvmf/common.sh@478 -- # '[' -n 2040025 ']'
00:17:29.536 08:51:46 -- nvmf/common.sh@479 -- # killprocess 2040025
00:17:29.536 08:51:46 -- common/autotest_common.sh@936 -- # '[' -z 2040025 ']'
00:17:29.536 08:51:46 -- common/autotest_common.sh@940 -- # kill -0 2040025
00:17:29.536 08:51:46 -- common/autotest_common.sh@941 -- # uname
00:17:29.536 08:51:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:29.536 08:51:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2040025
00:17:29.794 08:51:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:29.794 08:51:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:29.794 08:51:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2040025'
killing process with pid 2040025
08:51:46 -- common/autotest_common.sh@955 -- # kill 2040025
08:51:46 -- common/autotest_common.sh@960 -- # wait 2040025
00:17:29.794 [2024-04-26 08:51:47.026935] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:17:30.054 08:51:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:17:30.054 08:51:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:17:30.054 08:51:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:17:30.054 08:51:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:30.054 08:51:47 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:17:30.054 08:51:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:30.054 08:51:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:30.054 08:51:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:31.962 08:51:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:17:31.962
00:17:31.962 real 0m7.067s
00:17:31.962 user 0m21.394s
00:17:31.962 sys 0m1.410s
00:17:31.962 08:51:49 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:17:31.962 08:51:49 -- common/autotest_common.sh@10 -- # set +x
00:17:31.962 ************************************
00:17:31.962 END TEST nvmf_host_management
00:17:31.962 ************************************
00:17:31.962 08:51:49 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:17:31.962
00:17:31.962 real 0m14.148s
00:17:31.962 user 0m23.317s
00:17:31.962 sys 0m6.609s
00:17:31.962 08:51:49 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:17:31.962 08:51:49 -- common/autotest_common.sh@10 -- # set +x
00:17:31.962 ************************************
00:17:31.962 END TEST nvmf_host_management
00:17:31.962 ************************************
00:17:32.221 08:51:49 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:17:32.221 08:51:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:17:32.221 08:51:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:17:32.221 08:51:49 -- common/autotest_common.sh@10 -- # set +x
00:17:32.221 ************************************
00:17:32.221 START TEST nvmf_lvol
00:17:32.221 ************************************
00:17:32.221 08:51:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:17:32.481 * Looking for test storage...
00:17:32.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:32.481 08:51:49 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:32.481 08:51:49 -- nvmf/common.sh@7 -- # uname -s 00:17:32.481 08:51:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.481 08:51:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.481 08:51:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.481 08:51:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.481 08:51:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.481 08:51:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.481 08:51:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.481 08:51:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.481 08:51:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.481 08:51:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.481 08:51:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:32.481 08:51:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:32.481 08:51:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.481 08:51:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.481 08:51:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:32.481 08:51:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.481 08:51:49 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:32.481 08:51:49 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.481 08:51:49 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.481 08:51:49 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.481 08:51:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.481 08:51:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.481 08:51:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.481 08:51:49 -- paths/export.sh@5 -- # export PATH 00:17:32.481 08:51:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.481 08:51:49 -- nvmf/common.sh@47 -- # : 0 00:17:32.481 08:51:49 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:32.481 08:51:49 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:32.481 08:51:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.481 08:51:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.481 08:51:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.481 08:51:49 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:32.481 08:51:49 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:32.481 08:51:49 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:32.481 08:51:49 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:32.481 08:51:49 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:32.481 08:51:49 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:32.481 08:51:49 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:32.481 08:51:49 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.481 08:51:49 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:32.481 08:51:49 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:32.481 08:51:49 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.481 08:51:49 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:32.481 08:51:49 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:32.481 08:51:49 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:32.481 08:51:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.481 08:51:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.481 08:51:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.481 08:51:49 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:32.481 08:51:49 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:32.481 08:51:49 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:32.481 08:51:49 -- common/autotest_common.sh@10 -- # set +x 00:17:39.095 08:51:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:39.095 08:51:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:39.095 08:51:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:39.095 08:51:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:39.095 08:51:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:39.095 08:51:55 
-- nvmf/common.sh@293 -- # pci_drivers=() 00:17:39.095 08:51:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:39.095 08:51:55 -- nvmf/common.sh@295 -- # net_devs=() 00:17:39.095 08:51:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:39.095 08:51:55 -- nvmf/common.sh@296 -- # e810=() 00:17:39.095 08:51:55 -- nvmf/common.sh@296 -- # local -ga e810 00:17:39.095 08:51:55 -- nvmf/common.sh@297 -- # x722=() 00:17:39.095 08:51:55 -- nvmf/common.sh@297 -- # local -ga x722 00:17:39.095 08:51:55 -- nvmf/common.sh@298 -- # mlx=() 00:17:39.095 08:51:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:39.095 08:51:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.095 08:51:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.095 08:51:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.095 08:51:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.095 08:51:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.095 08:51:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.095 08:51:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.095 08:51:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.095 08:51:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.095 08:51:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.095 08:51:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.095 08:51:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:39.095 08:51:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:39.095 08:51:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:39.095 08:51:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.095 08:51:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:39.095 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:39.095 08:51:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.095 08:51:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:39.095 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:39.095 08:51:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:39.095 08:51:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.095 08:51:55 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.095 08:51:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:39.095 08:51:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.095 08:51:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:39.095 Found net devices under 0000:af:00.0: cvl_0_0 00:17:39.095 08:51:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.095 08:51:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.095 08:51:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.095 08:51:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:39.095 08:51:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.095 08:51:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:39.095 Found net devices under 0000:af:00.1: cvl_0_1 00:17:39.095 08:51:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.095 08:51:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:39.095 08:51:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:39.095 08:51:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:39.095 08:51:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:39.095 08:51:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.095 08:51:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.095 08:51:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.095 08:51:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:39.095 08:51:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.095 08:51:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.095 08:51:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:39.095 08:51:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.095 08:51:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.095 08:51:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:39.095 08:51:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:39.095 08:51:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.095 08:51:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.095 08:51:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.095 08:51:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.095 08:51:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:39.095 08:51:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.095 08:51:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.095 08:51:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.095 08:51:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:39.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:17:39.095 00:17:39.095 --- 10.0.0.2 ping statistics --- 00:17:39.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.095 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:17:39.095 08:51:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:39.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:17:39.096 00:17:39.096 --- 10.0.0.1 ping statistics --- 00:17:39.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.096 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:17:39.096 08:51:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.096 08:51:56 -- nvmf/common.sh@411 -- # return 0 00:17:39.096 08:51:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:39.096 08:51:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.096 08:51:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:39.096 08:51:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:39.096 08:51:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.096 08:51:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:39.096 08:51:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:39.096 08:51:56 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:39.096 08:51:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:39.096 08:51:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:39.096 08:51:56 -- common/autotest_common.sh@10 -- # set +x 00:17:39.096 08:51:56 -- nvmf/common.sh@470 -- # nvmfpid=2044604 00:17:39.096 08:51:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:39.096 08:51:56 -- nvmf/common.sh@471 -- # waitforlisten 2044604 00:17:39.096 08:51:56 -- common/autotest_common.sh@817 -- # '[' -z 2044604 ']' 00:17:39.096 08:51:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.096 08:51:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:39.096 08:51:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.096 08:51:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:39.096 08:51:56 -- common/autotest_common.sh@10 -- # set +x 00:17:39.356 [2024-04-26 08:51:56.356749] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:17:39.356 [2024-04-26 08:51:56.356794] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.356 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.356 [2024-04-26 08:51:56.429583] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:39.356 [2024-04-26 08:51:56.500359] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:39.356 [2024-04-26 08:51:56.500397] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:39.356 [2024-04-26 08:51:56.500406] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:39.356 [2024-04-26 08:51:56.500415] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:39.356 [2024-04-26 08:51:56.500422] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
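[Editor's note: each nvmftestinit in this log runs the same nvmf_tcp_init plumbing that was just traced: the target-side e810 port is moved into its own network namespace so traffic between 10.0.0.1 and 10.0.0.2 really crosses the wire. Collected into one sequence, verbatim from the trace above (interface names are as probed on this machine):]

# Target port in a namespace, initiator port in the root namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator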
00:17:39.356 [2024-04-26 08:51:56.500472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:39.356 [2024-04-26 08:51:56.500531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:39.356 [2024-04-26 08:51:56.500533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:39.925 08:51:57 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:17:39.925 08:51:57 -- common/autotest_common.sh@850 -- # return 0
00:17:39.925 08:51:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:17:39.925 08:51:57 -- common/autotest_common.sh@716 -- # xtrace_disable
00:17:39.925 08:51:57 -- common/autotest_common.sh@10 -- # set +x
00:17:40.185 08:51:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:40.185 08:51:57 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:17:40.185 [2024-04-26 08:51:57.357403] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:40.185 08:51:57 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:17:40.444 08:51:57 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:17:40.444 08:51:57 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:17:40.703 08:51:57 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:17:40.703 08:51:57 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:17:40.703 08:51:57 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:17:40.963 08:51:58 -- target/nvmf_lvol.sh@29 -- # lvs=822bd1a2-073d-49a8-a9b5-083d3fbc838a
00:17:40.963 08:51:58 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 822bd1a2-073d-49a8-a9b5-083d3fbc838a lvol 20
00:17:41.222 08:51:58 -- target/nvmf_lvol.sh@32 -- # lvol=5f9db81c-a7f2-4c5a-96c1-35ecaa6ef874
00:17:41.223 08:51:58 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:17:41.482 08:51:58 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5f9db81c-a7f2-4c5a-96c1-35ecaa6ef874
00:17:41.482 08:51:58 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:17:41.741 [2024-04-26 08:51:58.824295] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:41.741 08:51:58 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:17:42.000 08:51:59 -- target/nvmf_lvol.sh@42 -- # perf_pid=2045167
00:17:42.000 08:51:59 -- target/nvmf_lvol.sh@44 -- # sleep 1
00:17:42.000 08:51:59 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:17:42.000 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.935
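[Editor's note: with the perf job now writing against the lvol-backed namespace, the script turns to the snapshot and clone operations traced below. The whole fixture condenses to this RPC sequence; the shell-variable captures are illustrative stand-ins for the UUIDs recorded in this run:]

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                       # Malloc0: 64 MiB, 512 B blocks
$rpc bdev_malloc_create 64 512                       # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB logical volume
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # freeze current contents
$rpc bdev_lvol_resize "$lvol" 30                     # grow the live volume to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)       # writable clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                      # detach clone from its snapshot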
08:52:00 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5f9db81c-a7f2-4c5a-96c1-35ecaa6ef874 MY_SNAPSHOT
00:17:43.193 08:52:00 -- target/nvmf_lvol.sh@47 -- # snapshot=558e6665-dd30-4e21-adbf-9604f0b17acb
00:17:43.193 08:52:00 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5f9db81c-a7f2-4c5a-96c1-35ecaa6ef874 30
00:17:43.452 08:52:00 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 558e6665-dd30-4e21-adbf-9604f0b17acb MY_CLONE
00:17:43.452 08:52:00 -- target/nvmf_lvol.sh@49 -- # clone=8734e798-d382-46da-b132-eb73eda0a6c4
00:17:43.452 08:52:00 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8734e798-d382-46da-b132-eb73eda0a6c4
00:17:44.019 08:52:01 -- target/nvmf_lvol.sh@53 -- # wait 2045167
00:17:53.994 Initializing NVMe Controllers
00:17:53.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:17:53.994 Controller IO queue size 128, less than required.
00:17:53.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:53.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:17:53.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:17:53.994 Initialization complete. Launching workers.
00:17:53.994 ========================================================
00:17:53.994 Latency(us)
00:17:53.994 Device Information : IOPS MiB/s Average min max
00:17:53.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12111.00 47.31 10574.55 1605.16 46937.04
00:17:53.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11959.90 46.72 10707.14 3697.05 44135.03
00:17:53.994 ========================================================
00:17:53.994 Total : 24070.90 94.03 10640.43 1605.16 46937.04
00:17:53.994
00:17:53.994 08:52:09 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:17:53.994 08:52:09 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5f9db81c-a7f2-4c5a-96c1-35ecaa6ef874
00:17:53.994 08:52:09 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 822bd1a2-073d-49a8-a9b5-083d3fbc838a
00:17:53.994 08:52:09 -- target/nvmf_lvol.sh@60 -- # rm -f
00:17:53.994 08:52:09 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:17:53.994 08:52:09 -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:17:53.994 08:52:09 -- nvmf/common.sh@477 -- # nvmfcleanup
00:17:53.994 08:52:09 -- nvmf/common.sh@117 -- # sync
00:17:53.994 08:52:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:53.994 08:52:09 -- nvmf/common.sh@120 -- # set +e
00:17:53.994 08:52:09 -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:53.994 08:52:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:53.994 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:17:53.994 08:52:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:53.994 08:52:10 -- nvmf/common.sh@124 -- # set -e
00:17:53.994 08:52:10 -- nvmf/common.sh@125 -- # return 0
00:17:53.994 08:52:10 -- nvmf/common.sh@478 -- # '[' -n 2044604 ']'
00:17:53.994 08:52:10 -- nvmf/common.sh@479 -- # killprocess 2044604 00:17:53.994 08:52:10 -- common/autotest_common.sh@936 -- # '[' -z 2044604 ']' 00:17:53.994 08:52:10 -- common/autotest_common.sh@940 -- # kill -0 2044604 00:17:53.994 08:52:10 -- common/autotest_common.sh@941 -- # uname 00:17:53.994 08:52:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:53.994 08:52:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2044604 00:17:53.994 08:52:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:53.994 08:52:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:53.994 08:52:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2044604' 00:17:53.994 killing process with pid 2044604 00:17:53.994 08:52:10 -- common/autotest_common.sh@955 -- # kill 2044604 00:17:53.994 08:52:10 -- common/autotest_common.sh@960 -- # wait 2044604 00:17:53.994 08:52:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:53.994 08:52:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:53.994 08:52:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:53.994 08:52:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.994 08:52:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.994 08:52:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.994 08:52:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.994 08:52:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.423 08:52:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.423 00:17:55.423 real 0m23.033s 00:17:55.423 user 1m2.207s 00:17:55.423 sys 0m9.930s 00:17:55.423 08:52:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:55.423 08:52:12 -- common/autotest_common.sh@10 -- # set +x 00:17:55.423 ************************************ 00:17:55.423 END TEST nvmf_lvol 00:17:55.423 ************************************ 00:17:55.423 08:52:12 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:55.423 08:52:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:55.423 08:52:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:55.423 08:52:12 -- common/autotest_common.sh@10 -- # set +x 00:17:55.423 ************************************ 00:17:55.423 START TEST nvmf_lvs_grow 00:17:55.423 ************************************ 00:17:55.423 08:52:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:55.681 * Looking for test storage... 
00:17:55.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.681 08:52:12 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.681 08:52:12 -- nvmf/common.sh@7 -- # uname -s 00:17:55.681 08:52:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.681 08:52:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.681 08:52:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.681 08:52:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.681 08:52:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.681 08:52:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.681 08:52:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.681 08:52:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.681 08:52:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.681 08:52:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.681 08:52:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:55.681 08:52:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:55.681 08:52:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.681 08:52:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.681 08:52:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.681 08:52:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.681 08:52:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.681 08:52:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.681 08:52:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.681 08:52:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.681 08:52:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.682 08:52:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.682 08:52:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.682 08:52:12 -- paths/export.sh@5 -- # export PATH 00:17:55.682 08:52:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.682 08:52:12 -- nvmf/common.sh@47 -- # : 0 00:17:55.682 08:52:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.682 08:52:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.682 08:52:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.682 08:52:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.682 08:52:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.682 08:52:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.682 08:52:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.682 08:52:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.682 08:52:12 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:55.682 08:52:12 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.682 08:52:12 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:55.682 08:52:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:55.682 08:52:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.682 08:52:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:55.682 08:52:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:55.682 08:52:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:55.682 08:52:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.682 08:52:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.682 08:52:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.682 08:52:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:55.682 08:52:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:55.682 08:52:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.682 08:52:12 -- common/autotest_common.sh@10 -- # set +x 00:18:02.252 08:52:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:02.252 08:52:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:02.252 08:52:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:02.252 08:52:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:02.252 08:52:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:02.252 08:52:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:02.252 08:52:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:02.252 08:52:19 -- nvmf/common.sh@295 -- # net_devs=() 00:18:02.252 08:52:19 
-- nvmf/common.sh@295 -- # local -ga net_devs 00:18:02.252 08:52:19 -- nvmf/common.sh@296 -- # e810=() 00:18:02.252 08:52:19 -- nvmf/common.sh@296 -- # local -ga e810 00:18:02.252 08:52:19 -- nvmf/common.sh@297 -- # x722=() 00:18:02.252 08:52:19 -- nvmf/common.sh@297 -- # local -ga x722 00:18:02.252 08:52:19 -- nvmf/common.sh@298 -- # mlx=() 00:18:02.252 08:52:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:02.252 08:52:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.252 08:52:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.252 08:52:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.252 08:52:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.252 08:52:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.252 08:52:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.252 08:52:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:02.252 08:52:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.252 08:52:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.252 08:52:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.252 08:52:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.252 08:52:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:02.252 08:52:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:02.252 08:52:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:02.252 08:52:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:02.252 08:52:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:02.252 08:52:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:02.252 08:52:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.252 08:52:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:02.252 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:02.252 08:52:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:02.252 08:52:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.252 08:52:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.252 08:52:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.252 08:52:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.252 08:52:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.252 08:52:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:02.252 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:02.252 08:52:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:02.252 08:52:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.252 08:52:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.252 08:52:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.252 08:52:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.252 08:52:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:02.252 08:52:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:02.252 08:52:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:02.252 08:52:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.252 08:52:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.252 08:52:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:02.252 08:52:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.253 08:52:19 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:02.253 Found net devices under 0000:af:00.0: cvl_0_0 00:18:02.253 08:52:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.253 08:52:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.253 08:52:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.253 08:52:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:02.253 08:52:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.253 08:52:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:02.253 Found net devices under 0000:af:00.1: cvl_0_1 00:18:02.253 08:52:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.253 08:52:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:02.253 08:52:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:02.253 08:52:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:02.253 08:52:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:02.253 08:52:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:02.253 08:52:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.253 08:52:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.253 08:52:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:02.253 08:52:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:02.253 08:52:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:02.253 08:52:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:02.253 08:52:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:02.253 08:52:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:02.253 08:52:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.253 08:52:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:02.253 08:52:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:02.253 08:52:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:02.253 08:52:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:02.253 08:52:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:02.253 08:52:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:02.253 08:52:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:02.253 08:52:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:02.253 08:52:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:02.253 08:52:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:02.253 08:52:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:02.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:18:02.253 00:18:02.253 --- 10.0.0.2 ping statistics --- 00:18:02.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.253 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:18:02.253 08:52:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:02.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:02.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:18:02.253 00:18:02.253 --- 10.0.0.1 ping statistics --- 00:18:02.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.253 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:18:02.253 08:52:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.253 08:52:19 -- nvmf/common.sh@411 -- # return 0 00:18:02.253 08:52:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:02.253 08:52:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.253 08:52:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:02.253 08:52:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:02.253 08:52:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.253 08:52:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:02.253 08:52:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:02.253 08:52:19 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:02.253 08:52:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:02.253 08:52:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:02.253 08:52:19 -- common/autotest_common.sh@10 -- # set +x 00:18:02.253 08:52:19 -- nvmf/common.sh@470 -- # nvmfpid=2050738 00:18:02.253 08:52:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:02.253 08:52:19 -- nvmf/common.sh@471 -- # waitforlisten 2050738 00:18:02.253 08:52:19 -- common/autotest_common.sh@817 -- # '[' -z 2050738 ']' 00:18:02.253 08:52:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.253 08:52:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:02.253 08:52:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.253 08:52:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:02.253 08:52:19 -- common/autotest_common.sh@10 -- # set +x 00:18:02.513 [2024-04-26 08:52:19.543161] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:18:02.513 [2024-04-26 08:52:19.543208] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.513 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.513 [2024-04-26 08:52:19.614216] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.513 [2024-04-26 08:52:19.681092] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.513 [2024-04-26 08:52:19.681129] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.513 [2024-04-26 08:52:19.681138] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.513 [2024-04-26 08:52:19.681147] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.513 [2024-04-26 08:52:19.681154] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
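[Editor's note: the lvs_grow_clean case that follows exercises growing a logical-volume store in place: the store sits on an AIO bdev whose backing file is simply truncated to a larger size and rescanned, after which the lvstore reports more data clusters. A condensed sketch of the RPC calls traced below; the shell variables and UUID capture are illustrative:]

aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
truncate -s 200M "$aio"                               # 200 MiB backing file
$rpc bdev_aio_create "$aio" aio_bdev 4096             # AIO bdev, 4 KiB blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)  # prints the lvstore UUID
$rpc bdev_lvol_create -u "$lvs" lvol 150              # 150 MiB volume
truncate -s 400M "$aio"                               # grow the file under the bdev...
$rpc bdev_aio_rescan aio_bdev                         # ...and let the bdev pick it up
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'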
00:18:02.513 [2024-04-26 08:52:19.681174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.451 08:52:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:03.451 08:52:20 -- common/autotest_common.sh@850 -- # return 0 00:18:03.451 08:52:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:03.451 08:52:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:03.451 08:52:20 -- common/autotest_common.sh@10 -- # set +x 00:18:03.451 08:52:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.451 08:52:20 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:03.451 [2024-04-26 08:52:20.536519] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.451 08:52:20 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:03.451 08:52:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:03.451 08:52:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:03.451 08:52:20 -- common/autotest_common.sh@10 -- # set +x 00:18:03.710 ************************************ 00:18:03.711 START TEST lvs_grow_clean 00:18:03.711 ************************************ 00:18:03.711 08:52:20 -- common/autotest_common.sh@1111 -- # lvs_grow 00:18:03.711 08:52:20 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:03.711 08:52:20 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:03.711 08:52:20 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:03.711 08:52:20 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:03.711 08:52:20 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:03.711 08:52:20 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:03.711 08:52:20 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:03.711 08:52:20 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:03.711 08:52:20 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:03.970 08:52:20 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:03.970 08:52:20 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:03.970 08:52:21 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c56886ae-4260-4679-92ff-87aed89bbd43 00:18:03.970 08:52:21 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c56886ae-4260-4679-92ff-87aed89bbd43 00:18:03.970 08:52:21 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:04.229 08:52:21 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:04.229 08:52:21 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:04.229 08:52:21 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c56886ae-4260-4679-92ff-87aed89bbd43 lvol 150 00:18:04.489 08:52:21 -- target/nvmf_lvs_grow.sh@33 -- # lvol=8be1f96f-711f-4302-87ac-b15d65102159 00:18:04.489 08:52:21 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:04.489 08:52:21 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:04.489 [2024-04-26 08:52:21.646644] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:04.489 [2024-04-26 08:52:21.646691] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:04.489 true 00:18:04.489 08:52:21 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c56886ae-4260-4679-92ff-87aed89bbd43 00:18:04.489 08:52:21 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:04.748 08:52:21 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:04.748 08:52:21 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:04.748 08:52:21 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8be1f96f-711f-4302-87ac-b15d65102159 00:18:05.007 08:52:22 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:05.266 [2024-04-26 08:52:22.292590] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.266 08:52:22 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:05.266 08:52:22 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2051295 00:18:05.266 08:52:22 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:05.266 08:52:22 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:05.266 08:52:22 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2051295 /var/tmp/bdevperf.sock 00:18:05.266 08:52:22 -- common/autotest_common.sh@817 -- # '[' -z 2051295 ']' 00:18:05.266 08:52:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.266 08:52:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:05.266 08:52:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.266 08:52:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:05.266 08:52:22 -- common/autotest_common.sh@10 -- # set +x 00:18:05.525 [2024-04-26 08:52:22.517198] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
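Before the bdevperf EAL output that follows, the clean-grow fixture has already been assembled entirely over rpc.py. The essential sequence, as traced above (with $rpc standing for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py and aio_file for the test's aio_bdev backing file, both shortened here; the UUIDs are the ones printed in this run):

  truncate -s 200M aio_file                         # 200 MiB backing file
  $rpc bdev_aio_create aio_file aio_bdev 4096       # 4 KiB blocks -> 51200 blocks
  $rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs # 49 data clusters of 4 MiB
  $rpc bdev_lvol_create -u c56886ae-4260-4679-92ff-87aed89bbd43 lvol 150
  truncate -s 400M aio_file                         # grow the backing file...
  $rpc bdev_aio_rescan aio_bdev                     # ...51200 -> 102400 blocks; lvstore not yet grown
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8be1f96f-711f-4302-87ac-b15d65102159
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

bdevperf then connects from the host namespace with bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0, so the lvol is exercised end-to-end over TCP rather than in-process.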
00:18:05.525 [2024-04-26 08:52:22.517247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051295 ]
00:18:05.525 EAL: No free 2048 kB hugepages reported on node 1
00:18:05.525 [2024-04-26 08:52:22.588114] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:05.525 [2024-04-26 08:52:22.659106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:06.093 08:52:23 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:18:06.093 08:52:23 -- common/autotest_common.sh@850 -- # return 0
00:18:06.093 08:52:23 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:18:06.353 Nvme0n1
00:18:06.353 08:52:23 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:18:06.612 [
00:18:06.612 {
00:18:06.612 "name": "Nvme0n1",
00:18:06.612 "aliases": [
00:18:06.612 "8be1f96f-711f-4302-87ac-b15d65102159"
00:18:06.612 ],
00:18:06.612 "product_name": "NVMe disk",
00:18:06.612 "block_size": 4096,
00:18:06.612 "num_blocks": 38912,
00:18:06.612 "uuid": "8be1f96f-711f-4302-87ac-b15d65102159",
00:18:06.612 "assigned_rate_limits": {
00:18:06.612 "rw_ios_per_sec": 0,
00:18:06.612 "rw_mbytes_per_sec": 0,
00:18:06.612 "r_mbytes_per_sec": 0,
00:18:06.612 "w_mbytes_per_sec": 0
00:18:06.612 },
00:18:06.612 "claimed": false,
00:18:06.612 "zoned": false,
00:18:06.612 "supported_io_types": {
00:18:06.612 "read": true,
00:18:06.612 "write": true,
00:18:06.612 "unmap": true,
00:18:06.612 "write_zeroes": true,
00:18:06.612 "flush": true,
00:18:06.612 "reset": true,
00:18:06.612 "compare": true,
00:18:06.612 "compare_and_write": true,
00:18:06.612 "abort": true,
00:18:06.612 "nvme_admin": true,
00:18:06.612 "nvme_io": true
00:18:06.612 },
00:18:06.612 "memory_domains": [
00:18:06.612 {
00:18:06.612 "dma_device_id": "system",
00:18:06.612 "dma_device_type": 1
00:18:06.612 }
00:18:06.612 ],
00:18:06.612 "driver_specific": {
00:18:06.612 "nvme": [
00:18:06.612 {
00:18:06.612 "trid": {
00:18:06.612 "trtype": "TCP",
00:18:06.612 "adrfam": "IPv4",
00:18:06.612 "traddr": "10.0.0.2",
00:18:06.612 "trsvcid": "4420",
00:18:06.612 "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:18:06.612 },
00:18:06.612 "ctrlr_data": {
00:18:06.612 "cntlid": 1,
00:18:06.612 "vendor_id": "0x8086",
00:18:06.612 "model_number": "SPDK bdev Controller",
00:18:06.612 "serial_number": "SPDK0",
00:18:06.612 "firmware_revision": "24.05",
00:18:06.612 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:18:06.612 "oacs": {
00:18:06.612 "security": 0,
00:18:06.612 "format": 0,
00:18:06.612 "firmware": 0,
00:18:06.612 "ns_manage": 0
00:18:06.612 },
00:18:06.612 "multi_ctrlr": true,
00:18:06.612 "ana_reporting": false
00:18:06.612 },
00:18:06.612 "vs": {
00:18:06.612 "nvme_version": "1.3"
00:18:06.612 },
00:18:06.612 "ns_data": {
00:18:06.612 "id": 1,
00:18:06.612 "can_share": true
00:18:06.612 }
00:18:06.612 }
00:18:06.612 ],
00:18:06.612 "mp_policy": "active_passive"
00:18:06.612 }
00:18:06.612 }
00:18:06.612 ]
00:18:06.612 08:52:23 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2051420
00:18:06.612 08:52:23 -- target/nvmf_lvs_grow.sh@55 --
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:06.612 08:52:23 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:06.612 Running I/O for 10 seconds... 00:18:07.577 Latency(us) 00:18:07.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:07.577 Nvme0n1 : 1.00 22336.00 87.25 0.00 0.00 0.00 0.00 0.00 00:18:07.577 =================================================================================================================== 00:18:07.577 Total : 22336.00 87.25 0.00 0.00 0.00 0.00 0.00 00:18:07.577 00:18:08.520 08:52:25 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c56886ae-4260-4679-92ff-87aed89bbd43 00:18:08.780 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:08.780 Nvme0n1 : 2.00 22372.00 87.39 0.00 0.00 0.00 0.00 0.00 00:18:08.780 =================================================================================================================== 00:18:08.780 Total : 22372.00 87.39 0.00 0.00 0.00 0.00 0.00 00:18:08.780 00:18:08.780 true 00:18:08.780 08:52:25 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c56886ae-4260-4679-92ff-87aed89bbd43 00:18:08.780 08:52:25 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:09.041 08:52:26 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:09.041 08:52:26 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:09.041 08:52:26 -- target/nvmf_lvs_grow.sh@65 -- # wait 2051420 00:18:09.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.610 Nvme0n1 : 3.00 22510.67 87.93 0.00 0.00 0.00 0.00 0.00 00:18:09.610 =================================================================================================================== 00:18:09.610 Total : 22510.67 87.93 0.00 0.00 0.00 0.00 0.00 00:18:09.610 00:18:10.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:10.990 Nvme0n1 : 4.00 22587.00 88.23 0.00 0.00 0.00 0.00 0.00 00:18:10.990 =================================================================================================================== 00:18:10.990 Total : 22587.00 88.23 0.00 0.00 0.00 0.00 0.00 00:18:10.990 00:18:11.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.932 Nvme0n1 : 5.00 22718.40 88.74 0.00 0.00 0.00 0.00 0.00 00:18:11.932 =================================================================================================================== 00:18:11.932 Total : 22718.40 88.74 0.00 0.00 0.00 0.00 0.00 00:18:11.932 00:18:12.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.869 Nvme0n1 : 6.00 22837.33 89.21 0.00 0.00 0.00 0.00 0.00 00:18:12.869 =================================================================================================================== 00:18:12.869 Total : 22837.33 89.21 0.00 0.00 0.00 0.00 0.00 00:18:12.869 00:18:13.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:13.808 Nvme0n1 : 7.00 22884.57 89.39 0.00 0.00 0.00 0.00 0.00 00:18:13.808 =================================================================================================================== 00:18:13.808 Total : 22884.57 89.39 0.00 0.00 0.00 
0.00 0.00 00:18:13.808 00:18:14.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:14.747 Nvme0n1 : 8.00 22948.00 89.64 0.00 0.00 0.00 0.00 0.00 00:18:14.747 =================================================================================================================== 00:18:14.747 Total : 22948.00 89.64 0.00 0.00 0.00 0.00 0.00 00:18:14.747 00:18:15.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:15.693 Nvme0n1 : 9.00 22971.00 89.73 0.00 0.00 0.00 0.00 0.00 00:18:15.693 =================================================================================================================== 00:18:15.693 Total : 22971.00 89.73 0.00 0.00 0.00 0.00 0.00 00:18:15.693 00:18:16.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:16.633 Nvme0n1 : 10.00 22992.10 89.81 0.00 0.00 0.00 0.00 0.00 00:18:16.633 =================================================================================================================== 00:18:16.633 Total : 22992.10 89.81 0.00 0.00 0.00 0.00 0.00 00:18:16.633 00:18:16.633 00:18:16.633 Latency(us) 00:18:16.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.633 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:16.633 Nvme0n1 : 10.01 22992.78 89.82 0.00 0.00 5563.63 3053.98 24326.96 00:18:16.633 =================================================================================================================== 00:18:16.633 Total : 22992.78 89.82 0.00 0.00 5563.63 3053.98 24326.96 00:18:16.633 0 00:18:16.633 08:52:33 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2051295 00:18:16.633 08:52:33 -- common/autotest_common.sh@936 -- # '[' -z 2051295 ']' 00:18:16.633 08:52:33 -- common/autotest_common.sh@940 -- # kill -0 2051295 00:18:16.633 08:52:33 -- common/autotest_common.sh@941 -- # uname 00:18:16.633 08:52:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:16.633 08:52:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2051295 00:18:16.893 08:52:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:16.893 08:52:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:16.893 08:52:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2051295' 00:18:16.893 killing process with pid 2051295 00:18:16.893 08:52:33 -- common/autotest_common.sh@955 -- # kill 2051295 00:18:16.893 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.893 00:18:16.893 Latency(us) 00:18:16.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.893 =================================================================================================================== 00:18:16.893 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.893 08:52:33 -- common/autotest_common.sh@960 -- # wait 2051295 00:18:16.893 08:52:34 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:17.152 08:52:34 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c56886ae-4260-4679-92ff-87aed89bbd43 00:18:17.152 08:52:34 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:17.412 08:52:34 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:17.412 08:52:34 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:17.412 08:52:34 -- 
target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:17.412 [2024-04-26 08:52:34.625724] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:17.672 08:52:34 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c56886ae-4260-4679-92ff-87aed89bbd43 00:18:17.672 08:52:34 -- common/autotest_common.sh@638 -- # local es=0 00:18:17.672 08:52:34 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c56886ae-4260-4679-92ff-87aed89bbd43 00:18:17.672 08:52:34 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.672 08:52:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:17.672 08:52:34 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.672 08:52:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:17.672 08:52:34 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.672 08:52:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:17.672 08:52:34 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.672 08:52:34 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:17.672 08:52:34 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c56886ae-4260-4679-92ff-87aed89bbd43 00:18:17.672 request: 00:18:17.672 { 00:18:17.672 "uuid": "c56886ae-4260-4679-92ff-87aed89bbd43", 00:18:17.672 "method": "bdev_lvol_get_lvstores", 00:18:17.672 "req_id": 1 00:18:17.672 } 00:18:17.672 Got JSON-RPC error response 00:18:17.672 response: 00:18:17.672 { 00:18:17.672 "code": -19, 00:18:17.672 "message": "No such device" 00:18:17.672 } 00:18:17.672 08:52:34 -- common/autotest_common.sh@641 -- # es=1 00:18:17.672 08:52:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:17.672 08:52:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:17.672 08:52:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:17.672 08:52:34 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:17.932 aio_bdev 00:18:17.932 08:52:35 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 8be1f96f-711f-4302-87ac-b15d65102159 00:18:17.932 08:52:35 -- common/autotest_common.sh@885 -- # local bdev_name=8be1f96f-711f-4302-87ac-b15d65102159 00:18:17.932 08:52:35 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:17.932 08:52:35 -- common/autotest_common.sh@887 -- # local i 00:18:17.932 08:52:35 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:17.932 08:52:35 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:17.932 08:52:35 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:18.192 08:52:35 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 
8be1f96f-711f-4302-87ac-b15d65102159 -t 2000
00:18:18.192 [
00:18:18.192 {
00:18:18.192 "name": "8be1f96f-711f-4302-87ac-b15d65102159",
00:18:18.192 "aliases": [
00:18:18.192 "lvs/lvol"
00:18:18.192 ],
00:18:18.192 "product_name": "Logical Volume",
00:18:18.192 "block_size": 4096,
00:18:18.192 "num_blocks": 38912,
00:18:18.192 "uuid": "8be1f96f-711f-4302-87ac-b15d65102159",
00:18:18.192 "assigned_rate_limits": {
00:18:18.192 "rw_ios_per_sec": 0,
00:18:18.192 "rw_mbytes_per_sec": 0,
00:18:18.192 "r_mbytes_per_sec": 0,
00:18:18.192 "w_mbytes_per_sec": 0
00:18:18.192 },
00:18:18.192 "claimed": false,
00:18:18.192 "zoned": false,
00:18:18.192 "supported_io_types": {
00:18:18.192 "read": true,
00:18:18.192 "write": true,
00:18:18.192 "unmap": true,
00:18:18.192 "write_zeroes": true,
00:18:18.192 "flush": false,
00:18:18.192 "reset": true,
00:18:18.192 "compare": false,
00:18:18.192 "compare_and_write": false,
00:18:18.192 "abort": false,
00:18:18.192 "nvme_admin": false,
00:18:18.192 "nvme_io": false
00:18:18.192 },
00:18:18.192 "driver_specific": {
00:18:18.192 "lvol": {
00:18:18.192 "lvol_store_uuid": "c56886ae-4260-4679-92ff-87aed89bbd43",
00:18:18.192 "base_bdev": "aio_bdev",
00:18:18.192 "thin_provision": false,
00:18:18.192 "snapshot": false,
00:18:18.192 "clone": false,
00:18:18.192 "esnap_clone": false
00:18:18.192 }
00:18:18.192 }
00:18:18.192 }
00:18:18.192 ]
00:18:18.192 08:52:35 -- common/autotest_common.sh@893 -- # return 0
00:18:18.192 08:52:35 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c56886ae-4260-4679-92ff-87aed89bbd43
00:18:18.192 08:52:35 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters'
00:18:18.452 08:52:35 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 ))
00:18:18.452 08:52:35 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c56886ae-4260-4679-92ff-87aed89bbd43
00:18:18.452 08:52:35 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters'
00:18:18.714 08:52:35 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 ))
00:18:18.714 08:52:35 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8be1f96f-711f-4302-87ac-b15d65102159
00:18:18.714 08:52:35 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c56886ae-4260-4679-92ff-87aed89bbd43
00:18:18.994 08:52:36 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:18:19.253 08:52:36 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:18:19.253
00:18:19.253 real 0m15.579s
00:18:19.253 user 0m14.629s
00:18:19.253 sys 0m2.034s
00:18:19.253 08:52:36 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:18:19.253 08:52:36 -- common/autotest_common.sh@10 -- # set +x
00:18:19.253 ************************************
00:18:19.253 END TEST lvs_grow_clean
00:18:19.253 ************************************
00:18:19.253 08:52:36 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty
00:18:19.253 08:52:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:18:19.253 08:52:36 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:19.253 08:52:36 -- common/autotest_common.sh@10 -- # set +x
00:18:19.253 ************************************
00:18:19.253 START TEST lvs_grow_dirty 00:18:19.253 ************************************ 00:18:19.253 08:52:36 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:18:19.253 08:52:36 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:19.253 08:52:36 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:19.253 08:52:36 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:19.253 08:52:36 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:19.253 08:52:36 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:19.253 08:52:36 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:19.253 08:52:36 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:19.253 08:52:36 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:19.513 08:52:36 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:19.513 08:52:36 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:19.513 08:52:36 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:19.773 08:52:36 -- target/nvmf_lvs_grow.sh@28 -- # lvs=085ef084-0598-41a6-89bf-7f8792921a24 00:18:19.773 08:52:36 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ef084-0598-41a6-89bf-7f8792921a24 00:18:19.773 08:52:36 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:20.034 08:52:37 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:20.034 08:52:37 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:20.034 08:52:37 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 085ef084-0598-41a6-89bf-7f8792921a24 lvol 150 00:18:20.034 08:52:37 -- target/nvmf_lvs_grow.sh@33 -- # lvol=3da89a31-bf6c-419f-83fc-a1a5a4e97a81 00:18:20.034 08:52:37 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:20.034 08:52:37 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:20.293 [2024-04-26 08:52:37.375121] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:20.293 [2024-04-26 08:52:37.375166] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:20.293 true 00:18:20.293 08:52:37 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ef084-0598-41a6-89bf-7f8792921a24 00:18:20.293 08:52:37 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:20.553 08:52:37 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:20.553 08:52:37 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:20.553 08:52:37 
-- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3da89a31-bf6c-419f-83fc-a1a5a4e97a81 00:18:20.813 08:52:37 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:20.813 08:52:38 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:21.073 08:52:38 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:21.073 08:52:38 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2054054 00:18:21.073 08:52:38 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:21.073 08:52:38 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2054054 /var/tmp/bdevperf.sock 00:18:21.073 08:52:38 -- common/autotest_common.sh@817 -- # '[' -z 2054054 ']' 00:18:21.073 08:52:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:21.073 08:52:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:21.073 08:52:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:21.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:21.073 08:52:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:21.073 08:52:38 -- common/autotest_common.sh@10 -- # set +x 00:18:21.073 [2024-04-26 08:52:38.223115] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
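The dirty pass repeats the clean fixture (new lvstore 085ef084-0598-41a6-89bf-7f8792921a24, lvol 3da89a31-bf6c-419f-83fc-a1a5a4e97a81, the same 49-cluster start, the same truncate to 400M plus bdev_aio_rescan), but this time the grow is issued while the bdevperf instance starting here is actively writing. The check amounts to, in outline (UUID from this run; $rpc shortened as before):

  # while the 10 s, queue-depth-128 randwrite workload is in flight:
  $rpc bdev_lvol_grow_lvstore -u 085ef084-0598-41a6-89bf-7f8792921a24
  $rpc bdev_lvol_get_lvstores -u 085ef084-0598-41a6-89bf-7f8792921a24 \
      | jq -r '.[0].total_data_clusters'        # 49 before the grow, 99 after

Throughput in the per-second samples below staying in the same band through the grow is the property under test: the lvstore metadata update must not stall the data path.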
00:18:21.073 [2024-04-26 08:52:38.223167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054054 ] 00:18:21.073 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.073 [2024-04-26 08:52:38.292785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.333 [2024-04-26 08:52:38.367041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.902 08:52:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:21.902 08:52:39 -- common/autotest_common.sh@850 -- # return 0 00:18:21.902 08:52:39 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:22.161 Nvme0n1 00:18:22.430 08:52:39 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:22.430 [ 00:18:22.430 { 00:18:22.430 "name": "Nvme0n1", 00:18:22.430 "aliases": [ 00:18:22.430 "3da89a31-bf6c-419f-83fc-a1a5a4e97a81" 00:18:22.430 ], 00:18:22.430 "product_name": "NVMe disk", 00:18:22.430 "block_size": 4096, 00:18:22.430 "num_blocks": 38912, 00:18:22.430 "uuid": "3da89a31-bf6c-419f-83fc-a1a5a4e97a81", 00:18:22.430 "assigned_rate_limits": { 00:18:22.430 "rw_ios_per_sec": 0, 00:18:22.430 "rw_mbytes_per_sec": 0, 00:18:22.430 "r_mbytes_per_sec": 0, 00:18:22.430 "w_mbytes_per_sec": 0 00:18:22.430 }, 00:18:22.430 "claimed": false, 00:18:22.430 "zoned": false, 00:18:22.430 "supported_io_types": { 00:18:22.430 "read": true, 00:18:22.430 "write": true, 00:18:22.430 "unmap": true, 00:18:22.430 "write_zeroes": true, 00:18:22.430 "flush": true, 00:18:22.430 "reset": true, 00:18:22.430 "compare": true, 00:18:22.430 "compare_and_write": true, 00:18:22.430 "abort": true, 00:18:22.430 "nvme_admin": true, 00:18:22.430 "nvme_io": true 00:18:22.430 }, 00:18:22.430 "memory_domains": [ 00:18:22.430 { 00:18:22.430 "dma_device_id": "system", 00:18:22.430 "dma_device_type": 1 00:18:22.430 } 00:18:22.430 ], 00:18:22.430 "driver_specific": { 00:18:22.430 "nvme": [ 00:18:22.430 { 00:18:22.430 "trid": { 00:18:22.430 "trtype": "TCP", 00:18:22.430 "adrfam": "IPv4", 00:18:22.430 "traddr": "10.0.0.2", 00:18:22.430 "trsvcid": "4420", 00:18:22.430 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:22.430 }, 00:18:22.430 "ctrlr_data": { 00:18:22.430 "cntlid": 1, 00:18:22.430 "vendor_id": "0x8086", 00:18:22.430 "model_number": "SPDK bdev Controller", 00:18:22.430 "serial_number": "SPDK0", 00:18:22.430 "firmware_revision": "24.05", 00:18:22.430 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:22.430 "oacs": { 00:18:22.430 "security": 0, 00:18:22.430 "format": 0, 00:18:22.430 "firmware": 0, 00:18:22.430 "ns_manage": 0 00:18:22.430 }, 00:18:22.430 "multi_ctrlr": true, 00:18:22.430 "ana_reporting": false 00:18:22.430 }, 00:18:22.430 "vs": { 00:18:22.430 "nvme_version": "1.3" 00:18:22.430 }, 00:18:22.430 "ns_data": { 00:18:22.430 "id": 1, 00:18:22.430 "can_share": true 00:18:22.430 } 00:18:22.430 } 00:18:22.430 ], 00:18:22.430 "mp_policy": "active_passive" 00:18:22.430 } 00:18:22.431 } 00:18:22.431 ] 00:18:22.431 08:52:39 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2054228 00:18:22.431 08:52:39 -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:22.431 08:52:39 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:22.431 Running I/O for 10 seconds... 00:18:23.462 Latency(us) 00:18:23.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:23.462 Nvme0n1 : 1.00 21618.00 84.45 0.00 0.00 0.00 0.00 0.00 00:18:23.462 =================================================================================================================== 00:18:23.462 Total : 21618.00 84.45 0.00 0.00 0.00 0.00 0.00 00:18:23.462 00:18:24.398 08:52:41 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 085ef084-0598-41a6-89bf-7f8792921a24 00:18:24.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:24.657 Nvme0n1 : 2.00 22141.00 86.49 0.00 0.00 0.00 0.00 0.00 00:18:24.657 =================================================================================================================== 00:18:24.657 Total : 22141.00 86.49 0.00 0.00 0.00 0.00 0.00 00:18:24.657 00:18:24.657 true 00:18:24.657 08:52:41 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ef084-0598-41a6-89bf-7f8792921a24 00:18:24.657 08:52:41 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:24.915 08:52:41 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:24.915 08:52:41 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:24.915 08:52:41 -- target/nvmf_lvs_grow.sh@65 -- # wait 2054228 00:18:25.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:25.484 Nvme0n1 : 3.00 22306.00 87.13 0.00 0.00 0.00 0.00 0.00 00:18:25.484 =================================================================================================================== 00:18:25.484 Total : 22306.00 87.13 0.00 0.00 0.00 0.00 0.00 00:18:25.484 00:18:26.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:26.420 Nvme0n1 : 4.00 22389.00 87.46 0.00 0.00 0.00 0.00 0.00 00:18:26.420 =================================================================================================================== 00:18:26.420 Total : 22389.00 87.46 0.00 0.00 0.00 0.00 0.00 00:18:26.420 00:18:27.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:27.800 Nvme0n1 : 5.00 22458.60 87.73 0.00 0.00 0.00 0.00 0.00 00:18:27.800 =================================================================================================================== 00:18:27.800 Total : 22458.60 87.73 0.00 0.00 0.00 0.00 0.00 00:18:27.800 00:18:28.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:28.738 Nvme0n1 : 6.00 22520.83 87.97 0.00 0.00 0.00 0.00 0.00 00:18:28.738 =================================================================================================================== 00:18:28.738 Total : 22520.83 87.97 0.00 0.00 0.00 0.00 0.00 00:18:28.738 00:18:29.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:29.712 Nvme0n1 : 7.00 22576.71 88.19 0.00 0.00 0.00 0.00 0.00 00:18:29.712 =================================================================================================================== 00:18:29.712 Total : 22576.71 88.19 0.00 0.00 0.00 
0.00 0.00 00:18:29.712 00:18:30.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:30.649 Nvme0n1 : 8.00 22605.25 88.30 0.00 0.00 0.00 0.00 0.00 00:18:30.649 =================================================================================================================== 00:18:30.649 Total : 22605.25 88.30 0.00 0.00 0.00 0.00 0.00 00:18:30.649 00:18:31.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:31.586 Nvme0n1 : 9.00 22637.00 88.43 0.00 0.00 0.00 0.00 0.00 00:18:31.586 =================================================================================================================== 00:18:31.586 Total : 22637.00 88.43 0.00 0.00 0.00 0.00 0.00 00:18:31.586 00:18:32.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:32.524 Nvme0n1 : 10.00 22669.40 88.55 0.00 0.00 0.00 0.00 0.00 00:18:32.524 =================================================================================================================== 00:18:32.524 Total : 22669.40 88.55 0.00 0.00 0.00 0.00 0.00 00:18:32.524 00:18:32.524 00:18:32.524 Latency(us) 00:18:32.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:32.524 Nvme0n1 : 10.01 22668.98 88.55 0.00 0.00 5642.60 2922.91 27892.12 00:18:32.524 =================================================================================================================== 00:18:32.524 Total : 22668.98 88.55 0.00 0.00 5642.60 2922.91 27892.12 00:18:32.524 0 00:18:32.524 08:52:49 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2054054 00:18:32.524 08:52:49 -- common/autotest_common.sh@936 -- # '[' -z 2054054 ']' 00:18:32.524 08:52:49 -- common/autotest_common.sh@940 -- # kill -0 2054054 00:18:32.524 08:52:49 -- common/autotest_common.sh@941 -- # uname 00:18:32.524 08:52:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:32.524 08:52:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2054054 00:18:32.524 08:52:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:32.524 08:52:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:32.524 08:52:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2054054' 00:18:32.524 killing process with pid 2054054 00:18:32.524 08:52:49 -- common/autotest_common.sh@955 -- # kill 2054054 00:18:32.524 Received shutdown signal, test time was about 10.000000 seconds 00:18:32.524 00:18:32.524 Latency(us) 00:18:32.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.524 =================================================================================================================== 00:18:32.524 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:32.524 08:52:49 -- common/autotest_common.sh@960 -- # wait 2054054 00:18:32.783 08:52:49 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:33.042 08:52:50 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ef084-0598-41a6-89bf-7f8792921a24 00:18:33.042 08:52:50 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:33.301 08:52:50 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:33.301 08:52:50 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:33.301 08:52:50 -- 
target/nvmf_lvs_grow.sh@73 -- # kill -9 2050738 00:18:33.301 08:52:50 -- target/nvmf_lvs_grow.sh@74 -- # wait 2050738 00:18:33.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2050738 Killed "${NVMF_APP[@]}" "$@" 00:18:33.301 08:52:50 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:33.301 08:52:50 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:33.301 08:52:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:33.301 08:52:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:33.301 08:52:50 -- common/autotest_common.sh@10 -- # set +x 00:18:33.301 08:52:50 -- nvmf/common.sh@470 -- # nvmfpid=2055999 00:18:33.301 08:52:50 -- nvmf/common.sh@471 -- # waitforlisten 2055999 00:18:33.301 08:52:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:33.301 08:52:50 -- common/autotest_common.sh@817 -- # '[' -z 2055999 ']' 00:18:33.301 08:52:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.301 08:52:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:33.301 08:52:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.301 08:52:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:33.301 08:52:50 -- common/autotest_common.sh@10 -- # set +x 00:18:33.301 [2024-04-26 08:52:50.427409] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:18:33.301 [2024-04-26 08:52:50.427467] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.301 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.301 [2024-04-26 08:52:50.503649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.560 [2024-04-26 08:52:50.575102] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.560 [2024-04-26 08:52:50.575135] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.560 [2024-04-26 08:52:50.575145] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.560 [2024-04-26 08:52:50.575153] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.560 [2024-04-26 08:52:50.575161] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
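The kill -9 above is the heart of the dirty test: nvmf_tgt (pid 2050738) dies without closing the lvstore, so the blobstore is never marked clean on disk. The fresh target started next (pid 2055999) must recover the store when the AIO bdev is re-created, which is what the "Performing recovery on blobstore" and "Recover: blob 0x0" / "0x1" notices that follow report. In outline ($rpc shortened as before; this is a sketch of the traced steps, not the full script):

  kill -9 2050738                               # target dies with the lvstore dirty
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  $rpc bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  # recovery replays blob metadata; the grown geometry must survive the crash:
  $rpc bdev_lvol_get_lvstores -u 085ef084-0598-41a6-89bf-7f8792921a24 \
      | jq -r '.[0].free_clusters, .[0].total_data_clusters'   # expect 61 and 99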
00:18:33.560 [2024-04-26 08:52:50.575180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.128 08:52:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:34.128 08:52:51 -- common/autotest_common.sh@850 -- # return 0 00:18:34.128 08:52:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:34.128 08:52:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:34.128 08:52:51 -- common/autotest_common.sh@10 -- # set +x 00:18:34.128 08:52:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.128 08:52:51 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:34.387 [2024-04-26 08:52:51.412521] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:34.387 [2024-04-26 08:52:51.412602] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:34.387 [2024-04-26 08:52:51.412629] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:34.387 08:52:51 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:34.387 08:52:51 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 3da89a31-bf6c-419f-83fc-a1a5a4e97a81 00:18:34.387 08:52:51 -- common/autotest_common.sh@885 -- # local bdev_name=3da89a31-bf6c-419f-83fc-a1a5a4e97a81 00:18:34.387 08:52:51 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:34.387 08:52:51 -- common/autotest_common.sh@887 -- # local i 00:18:34.387 08:52:51 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:34.387 08:52:51 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:34.387 08:52:51 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:34.387 08:52:51 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3da89a31-bf6c-419f-83fc-a1a5a4e97a81 -t 2000 00:18:34.647 [ 00:18:34.647 { 00:18:34.647 "name": "3da89a31-bf6c-419f-83fc-a1a5a4e97a81", 00:18:34.647 "aliases": [ 00:18:34.647 "lvs/lvol" 00:18:34.647 ], 00:18:34.647 "product_name": "Logical Volume", 00:18:34.647 "block_size": 4096, 00:18:34.647 "num_blocks": 38912, 00:18:34.647 "uuid": "3da89a31-bf6c-419f-83fc-a1a5a4e97a81", 00:18:34.647 "assigned_rate_limits": { 00:18:34.647 "rw_ios_per_sec": 0, 00:18:34.647 "rw_mbytes_per_sec": 0, 00:18:34.647 "r_mbytes_per_sec": 0, 00:18:34.647 "w_mbytes_per_sec": 0 00:18:34.647 }, 00:18:34.647 "claimed": false, 00:18:34.647 "zoned": false, 00:18:34.647 "supported_io_types": { 00:18:34.647 "read": true, 00:18:34.647 "write": true, 00:18:34.647 "unmap": true, 00:18:34.647 "write_zeroes": true, 00:18:34.647 "flush": false, 00:18:34.647 "reset": true, 00:18:34.647 "compare": false, 00:18:34.647 "compare_and_write": false, 00:18:34.647 "abort": false, 00:18:34.647 "nvme_admin": false, 00:18:34.647 "nvme_io": false 00:18:34.647 }, 00:18:34.647 "driver_specific": { 00:18:34.647 "lvol": { 00:18:34.647 "lvol_store_uuid": "085ef084-0598-41a6-89bf-7f8792921a24", 00:18:34.647 "base_bdev": "aio_bdev", 00:18:34.647 "thin_provision": false, 00:18:34.647 "snapshot": false, 00:18:34.647 "clone": false, 00:18:34.647 "esnap_clone": false 00:18:34.647 } 00:18:34.647 } 00:18:34.647 } 00:18:34.647 ] 00:18:34.647 08:52:51 -- common/autotest_common.sh@893 -- # return 0 00:18:34.647 08:52:51 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ef084-0598-41a6-89bf-7f8792921a24 00:18:34.647 08:52:51 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:34.907 08:52:51 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:34.907 08:52:51 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ef084-0598-41a6-89bf-7f8792921a24 00:18:34.907 08:52:51 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:34.907 08:52:52 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:34.907 08:52:52 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:35.167 [2024-04-26 08:52:52.284881] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:35.167 08:52:52 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ef084-0598-41a6-89bf-7f8792921a24 00:18:35.167 08:52:52 -- common/autotest_common.sh@638 -- # local es=0 00:18:35.167 08:52:52 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ef084-0598-41a6-89bf-7f8792921a24 00:18:35.167 08:52:52 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.167 08:52:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:35.167 08:52:52 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.167 08:52:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:35.167 08:52:52 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.167 08:52:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:35.167 08:52:52 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.167 08:52:52 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:35.167 08:52:52 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ef084-0598-41a6-89bf-7f8792921a24 00:18:35.427 request: 00:18:35.427 { 00:18:35.427 "uuid": "085ef084-0598-41a6-89bf-7f8792921a24", 00:18:35.427 "method": "bdev_lvol_get_lvstores", 00:18:35.427 "req_id": 1 00:18:35.427 } 00:18:35.427 Got JSON-RPC error response 00:18:35.427 response: 00:18:35.427 { 00:18:35.427 "code": -19, 00:18:35.427 "message": "No such device" 00:18:35.427 } 00:18:35.427 08:52:52 -- common/autotest_common.sh@641 -- # es=1 00:18:35.427 08:52:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:35.427 08:52:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:35.427 08:52:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:35.427 08:52:52 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:35.427 aio_bdev 00:18:35.427 08:52:52 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 3da89a31-bf6c-419f-83fc-a1a5a4e97a81 00:18:35.427 08:52:52 -- 
common/autotest_common.sh@885 -- # local bdev_name=3da89a31-bf6c-419f-83fc-a1a5a4e97a81 00:18:35.427 08:52:52 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:35.427 08:52:52 -- common/autotest_common.sh@887 -- # local i 00:18:35.427 08:52:52 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:35.427 08:52:52 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:35.427 08:52:52 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:35.686 08:52:52 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3da89a31-bf6c-419f-83fc-a1a5a4e97a81 -t 2000 00:18:35.946 [ 00:18:35.946 { 00:18:35.946 "name": "3da89a31-bf6c-419f-83fc-a1a5a4e97a81", 00:18:35.946 "aliases": [ 00:18:35.946 "lvs/lvol" 00:18:35.946 ], 00:18:35.946 "product_name": "Logical Volume", 00:18:35.946 "block_size": 4096, 00:18:35.946 "num_blocks": 38912, 00:18:35.946 "uuid": "3da89a31-bf6c-419f-83fc-a1a5a4e97a81", 00:18:35.946 "assigned_rate_limits": { 00:18:35.946 "rw_ios_per_sec": 0, 00:18:35.946 "rw_mbytes_per_sec": 0, 00:18:35.946 "r_mbytes_per_sec": 0, 00:18:35.946 "w_mbytes_per_sec": 0 00:18:35.946 }, 00:18:35.946 "claimed": false, 00:18:35.946 "zoned": false, 00:18:35.946 "supported_io_types": { 00:18:35.946 "read": true, 00:18:35.946 "write": true, 00:18:35.946 "unmap": true, 00:18:35.946 "write_zeroes": true, 00:18:35.946 "flush": false, 00:18:35.946 "reset": true, 00:18:35.946 "compare": false, 00:18:35.946 "compare_and_write": false, 00:18:35.946 "abort": false, 00:18:35.946 "nvme_admin": false, 00:18:35.946 "nvme_io": false 00:18:35.946 }, 00:18:35.946 "driver_specific": { 00:18:35.946 "lvol": { 00:18:35.946 "lvol_store_uuid": "085ef084-0598-41a6-89bf-7f8792921a24", 00:18:35.946 "base_bdev": "aio_bdev", 00:18:35.946 "thin_provision": false, 00:18:35.946 "snapshot": false, 00:18:35.946 "clone": false, 00:18:35.946 "esnap_clone": false 00:18:35.946 } 00:18:35.946 } 00:18:35.946 } 00:18:35.946 ] 00:18:35.946 08:52:53 -- common/autotest_common.sh@893 -- # return 0 00:18:35.946 08:52:53 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ef084-0598-41a6-89bf-7f8792921a24 00:18:35.946 08:52:53 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:36.205 08:52:53 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:36.205 08:52:53 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 085ef084-0598-41a6-89bf-7f8792921a24 00:18:36.205 08:52:53 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:36.205 08:52:53 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:36.205 08:52:53 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3da89a31-bf6c-419f-83fc-a1a5a4e97a81 00:18:36.465 08:52:53 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 085ef084-0598-41a6-89bf-7f8792921a24 00:18:36.725 08:52:53 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:36.725 08:52:53 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:36.725 00:18:36.725 real 0m17.433s 00:18:36.725 user 
0m43.421s 00:18:36.725 sys 0m5.144s 00:18:36.725 08:52:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:36.725 08:52:53 -- common/autotest_common.sh@10 -- # set +x 00:18:36.725 ************************************ 00:18:36.725 END TEST lvs_grow_dirty 00:18:36.725 ************************************ 00:18:36.725 08:52:53 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:36.725 08:52:53 -- common/autotest_common.sh@794 -- # type=--id 00:18:36.725 08:52:53 -- common/autotest_common.sh@795 -- # id=0 00:18:36.725 08:52:53 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:36.725 08:52:53 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:36.984 08:52:53 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:36.984 08:52:53 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:36.984 08:52:53 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:36.984 08:52:53 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:36.984 nvmf_trace.0 00:18:36.984 08:52:54 -- common/autotest_common.sh@809 -- # return 0 00:18:36.984 08:52:54 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:36.984 08:52:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:36.984 08:52:54 -- nvmf/common.sh@117 -- # sync 00:18:36.984 08:52:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:36.984 08:52:54 -- nvmf/common.sh@120 -- # set +e 00:18:36.984 08:52:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:36.984 08:52:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:36.984 rmmod nvme_tcp 00:18:36.984 rmmod nvme_fabrics 00:18:36.984 rmmod nvme_keyring 00:18:36.984 08:52:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:36.984 08:52:54 -- nvmf/common.sh@124 -- # set -e 00:18:36.984 08:52:54 -- nvmf/common.sh@125 -- # return 0 00:18:36.984 08:52:54 -- nvmf/common.sh@478 -- # '[' -n 2055999 ']' 00:18:36.984 08:52:54 -- nvmf/common.sh@479 -- # killprocess 2055999 00:18:36.984 08:52:54 -- common/autotest_common.sh@936 -- # '[' -z 2055999 ']' 00:18:36.984 08:52:54 -- common/autotest_common.sh@940 -- # kill -0 2055999 00:18:36.984 08:52:54 -- common/autotest_common.sh@941 -- # uname 00:18:36.984 08:52:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.984 08:52:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2055999 00:18:36.984 08:52:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:36.984 08:52:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:36.984 08:52:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2055999' 00:18:36.984 killing process with pid 2055999 00:18:36.984 08:52:54 -- common/autotest_common.sh@955 -- # kill 2055999 00:18:36.984 08:52:54 -- common/autotest_common.sh@960 -- # wait 2055999 00:18:37.245 08:52:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:37.245 08:52:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:37.245 08:52:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:37.245 08:52:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:37.245 08:52:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:37.245 08:52:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.245 08:52:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.245 08:52:54 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:39.787 08:52:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:39.787 00:18:39.787 real 0m43.789s 00:18:39.787 user 1m4.226s 00:18:39.787 sys 0m12.957s 00:18:39.787 08:52:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:39.787 08:52:56 -- common/autotest_common.sh@10 -- # set +x 00:18:39.787 ************************************ 00:18:39.787 END TEST nvmf_lvs_grow 00:18:39.787 ************************************ 00:18:39.787 08:52:56 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:39.787 08:52:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:39.787 08:52:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:39.787 08:52:56 -- common/autotest_common.sh@10 -- # set +x 00:18:39.787 ************************************ 00:18:39.787 START TEST nvmf_bdev_io_wait 00:18:39.787 ************************************ 00:18:39.787 08:52:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:39.787 * Looking for test storage... 00:18:39.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:39.787 08:52:56 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.787 08:52:56 -- nvmf/common.sh@7 -- # uname -s 00:18:39.787 08:52:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.787 08:52:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.787 08:52:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.787 08:52:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.787 08:52:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.787 08:52:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.787 08:52:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.787 08:52:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.787 08:52:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.787 08:52:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.787 08:52:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:39.787 08:52:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:39.787 08:52:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.787 08:52:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.787 08:52:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.787 08:52:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.787 08:52:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:39.787 08:52:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.787 08:52:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.787 08:52:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.787 08:52:56 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.787 08:52:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.787 08:52:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.787 08:52:56 -- paths/export.sh@5 -- # export PATH 00:18:39.787 08:52:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.787 08:52:56 -- nvmf/common.sh@47 -- # : 0 00:18:39.787 08:52:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:39.787 08:52:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:39.787 08:52:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.787 08:52:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.787 08:52:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.787 08:52:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:39.787 08:52:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:39.787 08:52:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:39.787 08:52:56 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:39.787 08:52:56 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:39.787 08:52:56 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:39.787 08:52:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:39.787 08:52:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.787 08:52:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:39.787 08:52:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:39.787 08:52:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:39.787 08:52:56 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.787 08:52:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.787 08:52:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.787 08:52:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:39.787 08:52:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:39.787 08:52:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:39.787 08:52:56 -- common/autotest_common.sh@10 -- # set +x 00:18:46.364 08:53:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:46.364 08:53:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:46.364 08:53:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:46.364 08:53:03 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:46.364 08:53:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:46.364 08:53:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:46.364 08:53:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:46.364 08:53:03 -- nvmf/common.sh@295 -- # net_devs=() 00:18:46.364 08:53:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:46.364 08:53:03 -- nvmf/common.sh@296 -- # e810=() 00:18:46.364 08:53:03 -- nvmf/common.sh@296 -- # local -ga e810 00:18:46.364 08:53:03 -- nvmf/common.sh@297 -- # x722=() 00:18:46.364 08:53:03 -- nvmf/common.sh@297 -- # local -ga x722 00:18:46.364 08:53:03 -- nvmf/common.sh@298 -- # mlx=() 00:18:46.364 08:53:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:46.364 08:53:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:46.364 08:53:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:46.364 08:53:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:46.364 08:53:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:46.364 08:53:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:46.364 08:53:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:46.364 08:53:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:46.364 08:53:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:46.364 08:53:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:46.364 08:53:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:46.364 08:53:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:46.364 08:53:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:46.364 08:53:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:46.364 08:53:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:46.364 08:53:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:46.364 08:53:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:46.364 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:46.364 08:53:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
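The device discovery being traced here reduces to a sysfs lookup: match each PCI function's device ID against the known E810 IDs (0x1592, 0x159b), then glob the net interfaces registered under that function. A minimal standalone sketch of the same lookup, with the PCI address taken from this log and the rest illustrative:

    # list the kernel netdevs that sit behind one PCI function
    pci=0000:af:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] && echo "net device under $pci: ${dev##*/}"
    done

On this host the loop would print cvl_0_0, matching the 'Found net devices' line traced above.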
00:18:46.364 08:53:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:46.364 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:46.364 08:53:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:46.364 08:53:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:46.364 08:53:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.364 08:53:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:46.364 08:53:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.364 08:53:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:46.364 Found net devices under 0000:af:00.0: cvl_0_0 00:18:46.364 08:53:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.364 08:53:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:46.364 08:53:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.364 08:53:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:46.364 08:53:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.364 08:53:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:46.364 Found net devices under 0000:af:00.1: cvl_0_1 00:18:46.364 08:53:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.364 08:53:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:46.364 08:53:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:46.364 08:53:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:46.364 08:53:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:46.364 08:53:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.364 08:53:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.364 08:53:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:46.364 08:53:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:46.364 08:53:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:46.364 08:53:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:46.364 08:53:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:46.364 08:53:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:46.364 08:53:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.364 08:53:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:46.364 08:53:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:46.364 08:53:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:46.364 08:53:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:46.364 08:53:03 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:46.364 08:53:03 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:46.364 08:53:03 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:46.364 08:53:03 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:46.625 08:53:03 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:46.625 08:53:03 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:46.625 08:53:03 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:46.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:18:46.625 00:18:46.625 --- 10.0.0.2 ping statistics --- 00:18:46.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.625 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:18:46.625 08:53:03 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:46.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:46.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:18:46.625 00:18:46.625 --- 10.0.0.1 ping statistics --- 00:18:46.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.625 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:18:46.625 08:53:03 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.625 08:53:03 -- nvmf/common.sh@411 -- # return 0 00:18:46.625 08:53:03 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:46.625 08:53:03 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.625 08:53:03 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:46.625 08:53:03 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:46.625 08:53:03 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.625 08:53:03 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:46.625 08:53:03 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:46.625 08:53:03 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:46.625 08:53:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:46.625 08:53:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:46.625 08:53:03 -- common/autotest_common.sh@10 -- # set +x 00:18:46.625 08:53:03 -- nvmf/common.sh@470 -- # nvmfpid=2060515 00:18:46.625 08:53:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:46.625 08:53:03 -- nvmf/common.sh@471 -- # waitforlisten 2060515 00:18:46.625 08:53:03 -- common/autotest_common.sh@817 -- # '[' -z 2060515 ']' 00:18:46.625 08:53:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.625 08:53:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:46.625 08:53:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.625 08:53:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:46.625 08:53:03 -- common/autotest_common.sh@10 -- # set +x 00:18:46.625 [2024-04-26 08:53:03.763416] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
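Condensed, the nvmf_tcp_init sequence traced above wires one port of the E810 pair into a private network namespace so target and initiator can exchange real TCP traffic on a single host. A sketch of the same wiring, using only commands and names visible in this trace (requires root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The two pings are the round-trip checks logged just above, after which nvmf_tgt is launched inside the namespace via 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc'.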
00:18:46.625 [2024-04-26 08:53:03.763468] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.625 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.625 [2024-04-26 08:53:03.838501] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:46.885 [2024-04-26 08:53:03.908140] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.885 [2024-04-26 08:53:03.908188] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.885 [2024-04-26 08:53:03.908197] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.885 [2024-04-26 08:53:03.908205] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.885 [2024-04-26 08:53:03.908212] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.885 [2024-04-26 08:53:03.908261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.885 [2024-04-26 08:53:03.908345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.885 [2024-04-26 08:53:03.908430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:46.885 [2024-04-26 08:53:03.908432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.454 08:53:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:47.454 08:53:04 -- common/autotest_common.sh@850 -- # return 0 00:18:47.454 08:53:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:47.454 08:53:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:47.454 08:53:04 -- common/autotest_common.sh@10 -- # set +x 00:18:47.454 08:53:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.454 08:53:04 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:47.454 08:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.454 08:53:04 -- common/autotest_common.sh@10 -- # set +x 00:18:47.454 08:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.454 08:53:04 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:47.454 08:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.454 08:53:04 -- common/autotest_common.sh@10 -- # set +x 00:18:47.454 08:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.454 08:53:04 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:47.454 08:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.454 08:53:04 -- common/autotest_common.sh@10 -- # set +x 00:18:47.454 [2024-04-26 08:53:04.678513] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.454 08:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.454 08:53:04 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:47.454 08:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.454 08:53:04 -- common/autotest_common.sh@10 -- # set +x 00:18:47.714 Malloc0 00:18:47.714 08:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:47.714 08:53:04 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.714 08:53:04 -- common/autotest_common.sh@10 -- # set +x 00:18:47.714 08:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:47.714 08:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.714 08:53:04 -- common/autotest_common.sh@10 -- # set +x 00:18:47.714 08:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:47.714 08:53:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.714 08:53:04 -- common/autotest_common.sh@10 -- # set +x 00:18:47.714 [2024-04-26 08:53:04.747146] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.714 08:53:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2060689 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@30 -- # READ_PID=2060692 00:18:47.714 08:53:04 -- nvmf/common.sh@521 -- # config=() 00:18:47.714 08:53:04 -- nvmf/common.sh@521 -- # local subsystem config 00:18:47.714 08:53:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:47.714 08:53:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:47.714 { 00:18:47.714 "params": { 00:18:47.714 "name": "Nvme$subsystem", 00:18:47.714 "trtype": "$TEST_TRANSPORT", 00:18:47.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.714 "adrfam": "ipv4", 00:18:47.714 "trsvcid": "$NVMF_PORT", 00:18:47.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.714 "hdgst": ${hdgst:-false}, 00:18:47.714 "ddgst": ${ddgst:-false} 00:18:47.714 }, 00:18:47.714 "method": "bdev_nvme_attach_controller" 00:18:47.714 } 00:18:47.714 EOF 00:18:47.714 )") 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2060695 00:18:47.714 08:53:04 -- nvmf/common.sh@521 -- # config=() 00:18:47.714 08:53:04 -- nvmf/common.sh@521 -- # local subsystem config 00:18:47.714 08:53:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:47.714 08:53:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:47.714 { 00:18:47.714 "params": { 00:18:47.714 "name": "Nvme$subsystem", 00:18:47.714 "trtype": "$TEST_TRANSPORT", 00:18:47.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.714 "adrfam": "ipv4", 00:18:47.714 "trsvcid": "$NVMF_PORT", 00:18:47.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.714 "hdgst": ${hdgst:-false}, 00:18:47.714 "ddgst": ${ddgst:-false} 00:18:47.714 }, 00:18:47.714 "method": "bdev_nvme_attach_controller" 00:18:47.714 } 00:18:47.714 EOF 00:18:47.714 )") 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:47.714 08:53:04 -- nvmf/common.sh@543 -- # cat 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2060699 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@35 -- # sync 00:18:47.714 08:53:04 -- nvmf/common.sh@521 -- # config=() 00:18:47.714 08:53:04 -- nvmf/common.sh@521 -- # local subsystem config 00:18:47.714 08:53:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:47.714 08:53:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:47.714 { 00:18:47.714 "params": { 00:18:47.714 "name": "Nvme$subsystem", 00:18:47.714 "trtype": "$TEST_TRANSPORT", 00:18:47.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.714 "adrfam": "ipv4", 00:18:47.714 "trsvcid": "$NVMF_PORT", 00:18:47.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.714 "hdgst": ${hdgst:-false}, 00:18:47.714 "ddgst": ${ddgst:-false} 00:18:47.714 }, 00:18:47.714 "method": "bdev_nvme_attach_controller" 00:18:47.714 } 00:18:47.714 EOF 00:18:47.714 )") 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:47.714 08:53:04 -- nvmf/common.sh@543 -- # cat 00:18:47.714 08:53:04 -- nvmf/common.sh@521 -- # config=() 00:18:47.714 08:53:04 -- nvmf/common.sh@521 -- # local subsystem config 00:18:47.714 08:53:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:47.714 08:53:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:47.714 { 00:18:47.714 "params": { 00:18:47.714 "name": "Nvme$subsystem", 00:18:47.714 "trtype": "$TEST_TRANSPORT", 00:18:47.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.714 "adrfam": "ipv4", 00:18:47.714 "trsvcid": "$NVMF_PORT", 00:18:47.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.714 "hdgst": ${hdgst:-false}, 00:18:47.714 "ddgst": ${ddgst:-false} 00:18:47.714 }, 00:18:47.714 "method": "bdev_nvme_attach_controller" 00:18:47.714 } 00:18:47.714 EOF 00:18:47.714 )") 00:18:47.714 08:53:04 -- nvmf/common.sh@545 -- # jq . 00:18:47.714 08:53:04 -- nvmf/common.sh@543 -- # cat 00:18:47.714 08:53:04 -- target/bdev_io_wait.sh@37 -- # wait 2060689 00:18:47.714 08:53:04 -- nvmf/common.sh@543 -- # cat 00:18:47.714 08:53:04 -- nvmf/common.sh@546 -- # IFS=, 00:18:47.714 08:53:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:47.714 "params": { 00:18:47.714 "name": "Nvme1", 00:18:47.714 "trtype": "tcp", 00:18:47.714 "traddr": "10.0.0.2", 00:18:47.714 "adrfam": "ipv4", 00:18:47.714 "trsvcid": "4420", 00:18:47.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.714 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.714 "hdgst": false, 00:18:47.714 "ddgst": false 00:18:47.714 }, 00:18:47.714 "method": "bdev_nvme_attach_controller" 00:18:47.714 }' 00:18:47.714 08:53:04 -- nvmf/common.sh@545 -- # jq . 00:18:47.714 08:53:04 -- nvmf/common.sh@545 -- # jq . 00:18:47.714 08:53:04 -- nvmf/common.sh@545 -- # jq . 
00:18:47.714 08:53:04 -- nvmf/common.sh@546 -- # IFS=, 00:18:47.714 08:53:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:47.714 "params": { 00:18:47.714 "name": "Nvme1", 00:18:47.714 "trtype": "tcp", 00:18:47.714 "traddr": "10.0.0.2", 00:18:47.714 "adrfam": "ipv4", 00:18:47.714 "trsvcid": "4420", 00:18:47.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.714 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.714 "hdgst": false, 00:18:47.714 "ddgst": false 00:18:47.714 }, 00:18:47.714 "method": "bdev_nvme_attach_controller" 00:18:47.714 }' 00:18:47.714 08:53:04 -- nvmf/common.sh@546 -- # IFS=, 00:18:47.714 08:53:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:47.714 "params": { 00:18:47.714 "name": "Nvme1", 00:18:47.714 "trtype": "tcp", 00:18:47.714 "traddr": "10.0.0.2", 00:18:47.714 "adrfam": "ipv4", 00:18:47.714 "trsvcid": "4420", 00:18:47.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.714 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.714 "hdgst": false, 00:18:47.714 "ddgst": false 00:18:47.714 }, 00:18:47.714 "method": "bdev_nvme_attach_controller" 00:18:47.714 }' 00:18:47.714 08:53:04 -- nvmf/common.sh@546 -- # IFS=, 00:18:47.714 08:53:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:47.714 "params": { 00:18:47.714 "name": "Nvme1", 00:18:47.714 "trtype": "tcp", 00:18:47.714 "traddr": "10.0.0.2", 00:18:47.714 "adrfam": "ipv4", 00:18:47.714 "trsvcid": "4420", 00:18:47.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.714 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.714 "hdgst": false, 00:18:47.714 "ddgst": false 00:18:47.714 }, 00:18:47.714 "method": "bdev_nvme_attach_controller" 00:18:47.714 }' 00:18:47.714 [2024-04-26 08:53:04.798607] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:18:47.714 [2024-04-26 08:53:04.798663] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:47.714 [2024-04-26 08:53:04.798932] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:18:47.714 [2024-04-26 08:53:04.798979] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:47.714 [2024-04-26 08:53:04.800133] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:18:47.714 [2024-04-26 08:53:04.800179] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:47.714 [2024-04-26 08:53:04.805576] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
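Each of the four bdevperf instances launched above receives its bdev configuration as resolved JSON on an anonymous pipe (--json /dev/fd/63), and every config attaches the same remote controller under the name Nvme1; the jobs differ only in core mask and workload (-m 0x10 write, 0x20 read, 0x40 flush, 0x80 unmap), all at queue depth 128 with 4096-byte I/O for one second. For reference, the same attachment can be expressed as a plain rpc.py call; this is a sketch, with the flag spelling mirroring the bdev_nvme_attach_controller invocation that appears later in this log during the queue_depth run, and the hostnqn/digest fields from the JSON left at their defaults:

    # equivalent attach against a bdevperf started in RPC mode (-z -r <sock>)
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1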
00:18:47.714 [2024-04-26 08:53:04.805622] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:47.714 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.714 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.974 [2024-04-26 08:53:04.992351] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.974 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.974 [2024-04-26 08:53:05.064319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:47.974 [2024-04-26 08:53:05.080557] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.974 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.974 [2024-04-26 08:53:05.135882] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.974 [2024-04-26 08:53:05.174151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:47.974 [2024-04-26 08:53:05.187107] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.974 [2024-04-26 08:53:05.209136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:48.233 [2024-04-26 08:53:05.259464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:48.233 Running I/O for 1 seconds... 00:18:48.233 Running I/O for 1 seconds... 00:18:48.492 Running I/O for 1 seconds... 00:18:48.492 Running I/O for 1 seconds... 00:18:49.059 00:18:49.059 Latency(us) 00:18:49.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.059 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:49.059 Nvme1n1 : 1.01 11418.23 44.60 0.00 0.00 11141.79 4849.66 33764.15 00:18:49.059 =================================================================================================================== 00:18:49.059 Total : 11418.23 44.60 0.00 0.00 11141.79 4849.66 33764.15 00:18:49.318 00:18:49.318 Latency(us) 00:18:49.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.318 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:49.318 Nvme1n1 : 1.00 263817.48 1030.54 0.00 0.00 482.97 199.07 635.70 00:18:49.318 =================================================================================================================== 00:18:49.318 Total : 263817.48 1030.54 0.00 0.00 482.97 199.07 635.70 00:18:49.318 08:53:06 -- target/bdev_io_wait.sh@38 -- # wait 2060692 00:18:49.318 00:18:49.318 Latency(us) 00:18:49.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.318 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:49.318 Nvme1n1 : 1.01 14083.96 55.02 0.00 0.00 9061.61 5242.88 20971.52 00:18:49.318 =================================================================================================================== 00:18:49.318 Total : 14083.96 55.02 0.00 0.00 9061.61 5242.88 20971.52 00:18:49.318 00:18:49.318 Latency(us) 00:18:49.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.318 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:49.318 Nvme1n1 : 1.01 6525.99 25.49 0.00 0.00 19516.97 4928.31 53267.66 00:18:49.318 =================================================================================================================== 00:18:49.318 Total : 6525.99 25.49 0.00 0.00 19516.97 4928.31 53267.66 00:18:49.577 
08:53:06 -- target/bdev_io_wait.sh@39 -- # wait 2060695 00:18:49.577 08:53:06 -- target/bdev_io_wait.sh@40 -- # wait 2060699 00:18:49.577 08:53:06 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:49.577 08:53:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.577 08:53:06 -- common/autotest_common.sh@10 -- # set +x 00:18:49.837 08:53:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.837 08:53:06 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:49.837 08:53:06 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:49.837 08:53:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:49.837 08:53:06 -- nvmf/common.sh@117 -- # sync 00:18:49.837 08:53:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:49.837 08:53:06 -- nvmf/common.sh@120 -- # set +e 00:18:49.837 08:53:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.837 08:53:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:49.837 rmmod nvme_tcp 00:18:49.837 rmmod nvme_fabrics 00:18:49.837 rmmod nvme_keyring 00:18:49.837 08:53:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.837 08:53:06 -- nvmf/common.sh@124 -- # set -e 00:18:49.837 08:53:06 -- nvmf/common.sh@125 -- # return 0 00:18:49.837 08:53:06 -- nvmf/common.sh@478 -- # '[' -n 2060515 ']' 00:18:49.837 08:53:06 -- nvmf/common.sh@479 -- # killprocess 2060515 00:18:49.837 08:53:06 -- common/autotest_common.sh@936 -- # '[' -z 2060515 ']' 00:18:49.837 08:53:06 -- common/autotest_common.sh@940 -- # kill -0 2060515 00:18:49.837 08:53:06 -- common/autotest_common.sh@941 -- # uname 00:18:49.837 08:53:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:49.837 08:53:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2060515 00:18:49.837 08:53:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:49.837 08:53:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:49.837 08:53:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2060515' 00:18:49.837 killing process with pid 2060515 00:18:49.837 08:53:06 -- common/autotest_common.sh@955 -- # kill 2060515 00:18:49.837 08:53:06 -- common/autotest_common.sh@960 -- # wait 2060515 00:18:50.097 08:53:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:50.097 08:53:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:50.097 08:53:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:50.097 08:53:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.097 08:53:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:50.097 08:53:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.097 08:53:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.097 08:53:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.006 08:53:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:52.006 00:18:52.006 real 0m12.651s 00:18:52.006 user 0m20.166s 00:18:52.006 sys 0m7.301s 00:18:52.006 08:53:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:52.006 08:53:09 -- common/autotest_common.sh@10 -- # set +x 00:18:52.006 ************************************ 00:18:52.006 END TEST nvmf_bdev_io_wait 00:18:52.006 ************************************ 00:18:52.265 08:53:09 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:52.265 08:53:09 -- common/autotest_common.sh@1087 
-- # '[' 3 -le 1 ']' 00:18:52.265 08:53:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:52.265 08:53:09 -- common/autotest_common.sh@10 -- # set +x 00:18:52.265 ************************************ 00:18:52.265 START TEST nvmf_queue_depth 00:18:52.265 ************************************ 00:18:52.265 08:53:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:52.525 * Looking for test storage... 00:18:52.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:52.525 08:53:09 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.525 08:53:09 -- nvmf/common.sh@7 -- # uname -s 00:18:52.525 08:53:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.525 08:53:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.525 08:53:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.525 08:53:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.525 08:53:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.525 08:53:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.525 08:53:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.525 08:53:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.525 08:53:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.525 08:53:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.525 08:53:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:52.525 08:53:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:18:52.525 08:53:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.525 08:53:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.525 08:53:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:52.525 08:53:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.525 08:53:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:52.525 08:53:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.525 08:53:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.525 08:53:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.525 08:53:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.525 08:53:09 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.525 08:53:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.525 08:53:09 -- paths/export.sh@5 -- # export PATH 00:18:52.525 08:53:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.525 08:53:09 -- nvmf/common.sh@47 -- # : 0 00:18:52.525 08:53:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:52.525 08:53:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:52.525 08:53:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.525 08:53:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.525 08:53:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.525 08:53:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:52.525 08:53:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:52.525 08:53:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:52.525 08:53:09 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:52.525 08:53:09 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:52.525 08:53:09 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:52.525 08:53:09 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:52.525 08:53:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:52.525 08:53:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.525 08:53:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:52.525 08:53:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:52.525 08:53:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:52.525 08:53:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.525 08:53:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.525 08:53:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.525 08:53:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:52.525 08:53:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:52.525 08:53:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:52.525 08:53:09 -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.156 08:53:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:59.156 08:53:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:59.156 08:53:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:59.156 08:53:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:59.156 08:53:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:59.156 08:53:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:59.156 08:53:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:59.156 08:53:16 -- nvmf/common.sh@295 -- # net_devs=() 00:18:59.156 08:53:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:59.156 08:53:16 -- nvmf/common.sh@296 -- # e810=() 00:18:59.156 08:53:16 -- nvmf/common.sh@296 -- # local -ga e810 00:18:59.156 08:53:16 -- nvmf/common.sh@297 -- # x722=() 00:18:59.156 08:53:16 -- nvmf/common.sh@297 -- # local -ga x722 00:18:59.156 08:53:16 -- nvmf/common.sh@298 -- # mlx=() 00:18:59.156 08:53:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:59.156 08:53:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.156 08:53:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.156 08:53:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.156 08:53:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.156 08:53:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.156 08:53:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.156 08:53:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.156 08:53:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.156 08:53:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.156 08:53:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.156 08:53:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.156 08:53:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:59.156 08:53:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:59.156 08:53:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:59.156 08:53:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:59.156 08:53:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:59.156 08:53:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:59.156 08:53:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.156 08:53:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:59.156 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:59.156 08:53:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.156 08:53:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.156 08:53:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.156 08:53:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.156 08:53:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.156 08:53:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.156 08:53:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:59.156 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:59.156 08:53:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.156 08:53:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.156 08:53:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.156 08:53:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
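As in the bdev_io_wait run, the checks being traced here classify each NIC by PCI device ID before deciding which hardware family is in play. A reduced sketch of that classification, with the IDs as they appear in this trace (the Mellanox list is abridged):

    case "$devid" in
        0x1592|0x159b)               family=e810 ;;    # Intel E810, the ice-driven NICs found here
        0x37d2)                      family=x722 ;;    # Intel X722
        0x1013|0x1015|0x1017|0x1019) family=mlx  ;;    # Mellanox (abridged)
        *)                           family=unknown ;;
    esac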
00:18:59.156 08:53:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.157 08:53:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:59.157 08:53:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:59.157 08:53:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:59.157 08:53:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.157 08:53:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.157 08:53:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:59.157 08:53:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.157 08:53:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:59.157 Found net devices under 0000:af:00.0: cvl_0_0 00:18:59.157 08:53:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.157 08:53:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.157 08:53:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.157 08:53:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:59.157 08:53:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.157 08:53:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:59.157 Found net devices under 0000:af:00.1: cvl_0_1 00:18:59.157 08:53:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.157 08:53:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:59.157 08:53:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:59.157 08:53:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:59.157 08:53:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:59.157 08:53:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:59.157 08:53:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.157 08:53:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.157 08:53:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.157 08:53:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:59.157 08:53:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.157 08:53:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.157 08:53:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:59.157 08:53:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.157 08:53:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.157 08:53:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:59.157 08:53:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:59.157 08:53:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:59.157 08:53:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:59.417 08:53:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:59.417 08:53:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:59.417 08:53:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:59.417 08:53:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:59.417 08:53:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:59.417 08:53:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:59.417 08:53:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:59.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:59.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:18:59.417 00:18:59.417 --- 10.0.0.2 ping statistics --- 00:18:59.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.417 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:18:59.417 08:53:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:59.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:59.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:18:59.417 00:18:59.417 --- 10.0.0.1 ping statistics --- 00:18:59.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.417 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:18:59.417 08:53:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.417 08:53:16 -- nvmf/common.sh@411 -- # return 0 00:18:59.417 08:53:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:59.417 08:53:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.417 08:53:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:59.417 08:53:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:59.417 08:53:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.417 08:53:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:59.417 08:53:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:59.417 08:53:16 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:59.417 08:53:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:59.417 08:53:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:59.417 08:53:16 -- common/autotest_common.sh@10 -- # set +x 00:18:59.417 08:53:16 -- nvmf/common.sh@470 -- # nvmfpid=2064811 00:18:59.417 08:53:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:59.417 08:53:16 -- nvmf/common.sh@471 -- # waitforlisten 2064811 00:18:59.417 08:53:16 -- common/autotest_common.sh@817 -- # '[' -z 2064811 ']' 00:18:59.417 08:53:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.417 08:53:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:59.417 08:53:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.417 08:53:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:59.417 08:53:16 -- common/autotest_common.sh@10 -- # set +x 00:18:59.677 [2024-04-26 08:53:16.667983] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:18:59.677 [2024-04-26 08:53:16.668034] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.677 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.677 [2024-04-26 08:53:16.742350] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.677 [2024-04-26 08:53:16.812755] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.677 [2024-04-26 08:53:16.812795] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:59.677 [2024-04-26 08:53:16.812805] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.677 [2024-04-26 08:53:16.812813] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.677 [2024-04-26 08:53:16.812821] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.677 [2024-04-26 08:53:16.812842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.246 08:53:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:00.246 08:53:17 -- common/autotest_common.sh@850 -- # return 0 00:19:00.246 08:53:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:00.246 08:53:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:00.246 08:53:17 -- common/autotest_common.sh@10 -- # set +x 00:19:00.506 08:53:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.506 08:53:17 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:00.506 08:53:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.506 08:53:17 -- common/autotest_common.sh@10 -- # set +x 00:19:00.506 [2024-04-26 08:53:17.511599] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.506 08:53:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.506 08:53:17 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:00.506 08:53:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.506 08:53:17 -- common/autotest_common.sh@10 -- # set +x 00:19:00.506 Malloc0 00:19:00.506 08:53:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.506 08:53:17 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:00.506 08:53:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.506 08:53:17 -- common/autotest_common.sh@10 -- # set +x 00:19:00.506 08:53:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.506 08:53:17 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:00.506 08:53:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.506 08:53:17 -- common/autotest_common.sh@10 -- # set +x 00:19:00.506 08:53:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.506 08:53:17 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:00.506 08:53:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:00.506 08:53:17 -- common/autotest_common.sh@10 -- # set +x 00:19:00.507 [2024-04-26 08:53:17.576346] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.507 08:53:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:00.507 08:53:17 -- target/queue_depth.sh@30 -- # bdevperf_pid=2065087 00:19:00.507 08:53:17 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:00.507 08:53:17 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:00.507 08:53:17 -- target/queue_depth.sh@33 -- # waitforlisten 2065087 /var/tmp/bdevperf.sock 00:19:00.507 08:53:17 -- common/autotest_common.sh@817 -- # '[' -z 2065087 ']' 
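Strung together, the rpc_cmd calls traced above provision the entire target: a TCP transport, a RAM-backed Malloc bdev, a subsystem, a namespace, and a listener. Spelled out as direct rpc.py invocations (arguments exactly as logged; rpc.py resolves to the script under the SPDK checkout):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then started separately in RPC mode (-z -r /var/tmp/bdevperf.sock), which is the socket the waitforlisten loop here is polling for.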
00:19:00.507 08:53:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.507 08:53:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:00.507 08:53:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.507 08:53:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:00.507 08:53:17 -- common/autotest_common.sh@10 -- # set +x 00:19:00.507 [2024-04-26 08:53:17.627104] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:19:00.507 [2024-04-26 08:53:17.627150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2065087 ] 00:19:00.507 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.507 [2024-04-26 08:53:17.694763] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.766 [2024-04-26 08:53:17.762477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.335 08:53:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:01.335 08:53:18 -- common/autotest_common.sh@850 -- # return 0 00:19:01.335 08:53:18 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:01.335 08:53:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:01.335 08:53:18 -- common/autotest_common.sh@10 -- # set +x 00:19:01.595 NVMe0n1 00:19:01.595 08:53:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:01.595 08:53:18 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:01.595 Running I/O for 10 seconds... 
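For orientation: bdevperf above is launched with -z, so it idles on /var/tmp/bdevperf.sock until bdevperf.py issues perform_tests; only then does the 10-second, queue-depth-1024 verify run start. Everything the target side did before this point reduces to five RPCs. A minimal by-hand equivalent, offered as a sketch only: it assumes a running nvmf_tgt and the in-tree rpc.py, with paths and arguments copied from the trace rather than re-verified.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192    # flags as in the trace; -u 8192 allows 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420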
00:19:11.577 00:19:11.577 Latency(us) 00:19:11.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.577 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:11.577 Verification LBA range: start 0x0 length 0x4000 00:19:11.577 NVMe0n1 : 10.05 12649.26 49.41 0.00 0.00 80664.28 16357.79 67108.86 00:19:11.577 =================================================================================================================== 00:19:11.577 Total : 12649.26 49.41 0.00 0.00 80664.28 16357.79 67108.86 00:19:11.577 0 00:19:11.577 08:53:28 -- target/queue_depth.sh@39 -- # killprocess 2065087 00:19:11.577 08:53:28 -- common/autotest_common.sh@936 -- # '[' -z 2065087 ']' 00:19:11.577 08:53:28 -- common/autotest_common.sh@940 -- # kill -0 2065087 00:19:11.577 08:53:28 -- common/autotest_common.sh@941 -- # uname 00:19:11.577 08:53:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:11.577 08:53:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2065087 00:19:11.837 08:53:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:11.837 08:53:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:11.837 08:53:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2065087' 00:19:11.837 killing process with pid 2065087 00:19:11.837 08:53:28 -- common/autotest_common.sh@955 -- # kill 2065087 00:19:11.837 Received shutdown signal, test time was about 10.000000 seconds 00:19:11.837 00:19:11.837 Latency(us) 00:19:11.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.837 =================================================================================================================== 00:19:11.837 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:11.837 08:53:28 -- common/autotest_common.sh@960 -- # wait 2065087 00:19:11.837 08:53:29 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:11.837 08:53:29 -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:11.837 08:53:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:11.837 08:53:29 -- nvmf/common.sh@117 -- # sync 00:19:11.837 08:53:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:11.837 08:53:29 -- nvmf/common.sh@120 -- # set +e 00:19:11.837 08:53:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:11.837 08:53:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:11.837 rmmod nvme_tcp 00:19:12.098 rmmod nvme_fabrics 00:19:12.098 rmmod nvme_keyring 00:19:12.098 08:53:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:12.098 08:53:29 -- nvmf/common.sh@124 -- # set -e 00:19:12.098 08:53:29 -- nvmf/common.sh@125 -- # return 0 00:19:12.098 08:53:29 -- nvmf/common.sh@478 -- # '[' -n 2064811 ']' 00:19:12.098 08:53:29 -- nvmf/common.sh@479 -- # killprocess 2064811 00:19:12.098 08:53:29 -- common/autotest_common.sh@936 -- # '[' -z 2064811 ']' 00:19:12.098 08:53:29 -- common/autotest_common.sh@940 -- # kill -0 2064811 00:19:12.098 08:53:29 -- common/autotest_common.sh@941 -- # uname 00:19:12.098 08:53:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:12.098 08:53:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2064811 00:19:12.098 08:53:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:12.098 08:53:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:12.098 08:53:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2064811' 00:19:12.098 killing process with pid 2064811 00:19:12.098 
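A quick sanity check on the result table above, using Little's law (outstanding I/O equals IOPS times mean latency): 12649.26 IOPS x 80664.28 us comes to roughly 1020 requests in flight, essentially the configured queue depth of 1024, so the queue stayed saturated for the whole 10-second run. The second, all-zero latency table printed after the kill appears to be bdevperf's shutdown-time summary and carries no measurements.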
08:53:29 -- common/autotest_common.sh@955 -- # kill 2064811 00:19:12.098 08:53:29 -- common/autotest_common.sh@960 -- # wait 2064811 00:19:12.357 08:53:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:12.357 08:53:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:12.357 08:53:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:12.357 08:53:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:12.357 08:53:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:12.357 08:53:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.357 08:53:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.357 08:53:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.266 08:53:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:14.266 00:19:14.266 real 0m22.035s 00:19:14.266 user 0m25.050s 00:19:14.266 sys 0m7.367s 00:19:14.266 08:53:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:14.266 08:53:31 -- common/autotest_common.sh@10 -- # set +x 00:19:14.266 ************************************ 00:19:14.266 END TEST nvmf_queue_depth 00:19:14.266 ************************************ 00:19:14.526 08:53:31 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:14.526 08:53:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:14.526 08:53:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:14.526 08:53:31 -- common/autotest_common.sh@10 -- # set +x 00:19:14.526 ************************************ 00:19:14.526 START TEST nvmf_multipath 00:19:14.526 ************************************ 00:19:14.526 08:53:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:14.792 * Looking for test storage... 
00:19:14.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:14.792 08:53:31 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.792 08:53:31 -- nvmf/common.sh@7 -- # uname -s 00:19:14.792 08:53:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.792 08:53:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.792 08:53:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.792 08:53:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.792 08:53:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.792 08:53:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.792 08:53:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.792 08:53:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.792 08:53:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.792 08:53:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.792 08:53:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:14.792 08:53:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:14.792 08:53:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.792 08:53:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.792 08:53:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.792 08:53:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.792 08:53:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:14.792 08:53:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.792 08:53:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.792 08:53:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.793 08:53:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.793 08:53:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.793 08:53:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.793 08:53:31 -- paths/export.sh@5 -- # export PATH 00:19:14.793 08:53:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.793 08:53:31 -- nvmf/common.sh@47 -- # : 0 00:19:14.793 08:53:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:14.793 08:53:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:14.793 08:53:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.793 08:53:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.793 08:53:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.793 08:53:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:14.793 08:53:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:14.793 08:53:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:14.793 08:53:31 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:14.793 08:53:31 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:14.793 08:53:31 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:14.793 08:53:31 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:14.793 08:53:31 -- target/multipath.sh@43 -- # nvmftestinit 00:19:14.793 08:53:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:14.793 08:53:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.793 08:53:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:14.793 08:53:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:14.793 08:53:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:14.793 08:53:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.793 08:53:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.793 08:53:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.793 08:53:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:14.793 08:53:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:14.793 08:53:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:14.793 08:53:31 -- common/autotest_common.sh@10 -- # set +x 00:19:21.395 08:53:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:21.395 08:53:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:21.395 08:53:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:21.395 08:53:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:21.395 08:53:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:21.395 08:53:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:21.395 08:53:38 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:19:21.395 08:53:38 -- nvmf/common.sh@295 -- # net_devs=() 00:19:21.395 08:53:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:21.395 08:53:38 -- nvmf/common.sh@296 -- # e810=() 00:19:21.395 08:53:38 -- nvmf/common.sh@296 -- # local -ga e810 00:19:21.395 08:53:38 -- nvmf/common.sh@297 -- # x722=() 00:19:21.395 08:53:38 -- nvmf/common.sh@297 -- # local -ga x722 00:19:21.395 08:53:38 -- nvmf/common.sh@298 -- # mlx=() 00:19:21.395 08:53:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:21.395 08:53:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.395 08:53:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.395 08:53:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.395 08:53:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.395 08:53:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.395 08:53:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.395 08:53:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.395 08:53:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.395 08:53:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.395 08:53:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.395 08:53:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.395 08:53:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:21.395 08:53:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:21.395 08:53:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:21.395 08:53:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:21.395 08:53:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:21.395 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:21.395 08:53:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:21.395 08:53:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:21.395 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:21.395 08:53:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:21.395 08:53:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.395 08:53:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.395 08:53:38 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:19:21.395 08:53:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.395 08:53:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:21.395 Found net devices under 0000:af:00.0: cvl_0_0 00:19:21.395 08:53:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.395 08:53:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:21.395 08:53:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.395 08:53:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:21.395 08:53:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.395 08:53:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:21.395 Found net devices under 0000:af:00.1: cvl_0_1 00:19:21.395 08:53:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.395 08:53:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:21.395 08:53:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:21.395 08:53:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:21.395 08:53:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.395 08:53:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.395 08:53:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.395 08:53:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:21.395 08:53:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.395 08:53:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.395 08:53:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:21.395 08:53:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.395 08:53:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.395 08:53:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:21.395 08:53:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:21.395 08:53:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.395 08:53:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.395 08:53:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.395 08:53:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.395 08:53:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:21.395 08:53:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.395 08:53:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:21.395 08:53:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:21.395 08:53:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:21.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:19:21.395 00:19:21.395 --- 10.0.0.2 ping statistics --- 00:19:21.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.395 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:19:21.395 08:53:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:21.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:21.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:19:21.395 00:19:21.395 --- 10.0.0.1 ping statistics --- 00:19:21.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.395 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:19:21.395 08:53:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.395 08:53:38 -- nvmf/common.sh@411 -- # return 0 00:19:21.395 08:53:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:21.395 08:53:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.395 08:53:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:21.395 08:53:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.395 08:53:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:21.395 08:53:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:21.655 08:53:38 -- target/multipath.sh@45 -- # '[' -z ']' 00:19:21.655 08:53:38 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:21.655 only one NIC for nvmf test 00:19:21.655 08:53:38 -- target/multipath.sh@47 -- # nvmftestfini 00:19:21.655 08:53:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:21.655 08:53:38 -- nvmf/common.sh@117 -- # sync 00:19:21.655 08:53:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:21.655 08:53:38 -- nvmf/common.sh@120 -- # set +e 00:19:21.655 08:53:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:21.655 08:53:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:21.655 rmmod nvme_tcp 00:19:21.655 rmmod nvme_fabrics 00:19:21.655 rmmod nvme_keyring 00:19:21.655 08:53:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:21.655 08:53:38 -- nvmf/common.sh@124 -- # set -e 00:19:21.655 08:53:38 -- nvmf/common.sh@125 -- # return 0 00:19:21.655 08:53:38 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:21.655 08:53:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:21.655 08:53:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:21.655 08:53:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:21.655 08:53:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:21.655 08:53:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:21.655 08:53:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.655 08:53:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.655 08:53:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.565 08:53:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:23.565 08:53:40 -- target/multipath.sh@48 -- # exit 0 00:19:23.565 08:53:40 -- target/multipath.sh@1 -- # nvmftestfini 00:19:23.565 08:53:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:23.565 08:53:40 -- nvmf/common.sh@117 -- # sync 00:19:23.565 08:53:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:23.565 08:53:40 -- nvmf/common.sh@120 -- # set +e 00:19:23.565 08:53:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:23.565 08:53:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:23.565 08:53:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:23.565 08:53:40 -- nvmf/common.sh@124 -- # set -e 00:19:23.565 08:53:40 -- nvmf/common.sh@125 -- # return 0 00:19:23.565 08:53:40 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:23.565 08:53:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:23.565 08:53:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:23.565 08:53:40 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:19:23.565 08:53:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.565 08:53:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:23.565 08:53:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.565 08:53:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.565 08:53:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.825 08:53:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:23.825 00:19:23.825 real 0m9.117s 00:19:23.825 user 0m1.920s 00:19:23.825 sys 0m5.224s 00:19:23.825 08:53:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:23.825 08:53:40 -- common/autotest_common.sh@10 -- # set +x 00:19:23.825 ************************************ 00:19:23.825 END TEST nvmf_multipath 00:19:23.825 ************************************ 00:19:23.825 08:53:40 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:23.825 08:53:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:23.825 08:53:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:23.825 08:53:40 -- common/autotest_common.sh@10 -- # set +x 00:19:23.825 ************************************ 00:19:23.825 START TEST nvmf_zcopy 00:19:23.825 ************************************ 00:19:23.825 08:53:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:24.084 * Looking for test storage... 00:19:24.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:24.084 08:53:41 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:24.084 08:53:41 -- nvmf/common.sh@7 -- # uname -s 00:19:24.084 08:53:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.084 08:53:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.084 08:53:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.084 08:53:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.084 08:53:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.084 08:53:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.084 08:53:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.084 08:53:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.084 08:53:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.084 08:53:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.084 08:53:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:24.084 08:53:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:24.084 08:53:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.084 08:53:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.084 08:53:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:24.084 08:53:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.084 08:53:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:24.084 08:53:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.084 08:53:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.084 08:53:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.084 
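The nvmf_zcopy test starting here goes through the same nvmftestinit bring-up as the previous tests, traced in full below: enumerate the two ice ports by PCI ID (0x8086:0x159b), move cvl_0_0 into a private network namespace as the target-side port, and keep cvl_0_1 in the root namespace as the initiator side. Condensed, the nvmf_tcp_init steps in the trace that follows amount to this sketch (commands assembled from the trace; interface names are specific to this rig):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

Both directions are then ping-verified before nvme-tcp is loaded.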
08:53:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.084 08:53:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.084 08:53:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.085 08:53:41 -- paths/export.sh@5 -- # export PATH 00:19:24.085 08:53:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.085 08:53:41 -- nvmf/common.sh@47 -- # : 0 00:19:24.085 08:53:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:24.085 08:53:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:24.085 08:53:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.085 08:53:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.085 08:53:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.085 08:53:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:24.085 08:53:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:24.085 08:53:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:24.085 08:53:41 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:24.085 08:53:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:24.085 08:53:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.085 08:53:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:24.085 08:53:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:24.085 08:53:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:24.085 08:53:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.085 08:53:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:19:24.085 08:53:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.085 08:53:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:24.085 08:53:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:24.085 08:53:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:24.085 08:53:41 -- common/autotest_common.sh@10 -- # set +x 00:19:30.656 08:53:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:30.656 08:53:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:30.656 08:53:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:30.656 08:53:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:30.656 08:53:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:30.656 08:53:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:30.656 08:53:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:30.656 08:53:47 -- nvmf/common.sh@295 -- # net_devs=() 00:19:30.656 08:53:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:30.656 08:53:47 -- nvmf/common.sh@296 -- # e810=() 00:19:30.656 08:53:47 -- nvmf/common.sh@296 -- # local -ga e810 00:19:30.656 08:53:47 -- nvmf/common.sh@297 -- # x722=() 00:19:30.656 08:53:47 -- nvmf/common.sh@297 -- # local -ga x722 00:19:30.656 08:53:47 -- nvmf/common.sh@298 -- # mlx=() 00:19:30.656 08:53:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:30.656 08:53:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.656 08:53:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.657 08:53:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.657 08:53:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.657 08:53:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.657 08:53:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.657 08:53:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.657 08:53:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.657 08:53:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.657 08:53:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.657 08:53:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.657 08:53:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:30.657 08:53:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:30.657 08:53:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:30.657 08:53:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.657 08:53:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:30.657 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:30.657 08:53:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.657 08:53:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:30.657 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:19:30.657 08:53:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:30.657 08:53:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.657 08:53:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.657 08:53:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:30.657 08:53:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.657 08:53:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:30.657 Found net devices under 0000:af:00.0: cvl_0_0 00:19:30.657 08:53:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.657 08:53:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.657 08:53:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.657 08:53:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:30.657 08:53:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.657 08:53:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:30.657 Found net devices under 0000:af:00.1: cvl_0_1 00:19:30.657 08:53:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.657 08:53:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:30.657 08:53:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:30.657 08:53:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:30.657 08:53:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.657 08:53:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.657 08:53:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.657 08:53:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:30.657 08:53:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.657 08:53:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.657 08:53:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:30.657 08:53:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.657 08:53:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.657 08:53:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:30.657 08:53:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:30.657 08:53:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.657 08:53:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:30.657 08:53:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:30.657 08:53:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:30.657 08:53:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:30.657 08:53:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:30.657 08:53:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:30.657 
08:53:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:30.657 08:53:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:30.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:30.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:19:30.657 00:19:30.657 --- 10.0.0.2 ping statistics --- 00:19:30.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.657 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:19:30.657 08:53:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:30.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:30.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:19:30.657 00:19:30.657 --- 10.0.0.1 ping statistics --- 00:19:30.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:30.657 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:19:30.657 08:53:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:30.657 08:53:47 -- nvmf/common.sh@411 -- # return 0 00:19:30.657 08:53:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:30.657 08:53:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:30.657 08:53:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:30.657 08:53:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:30.657 08:53:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:30.657 08:53:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:30.657 08:53:47 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:30.657 08:53:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:30.657 08:53:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:30.657 08:53:47 -- common/autotest_common.sh@10 -- # set +x 00:19:30.657 08:53:47 -- nvmf/common.sh@470 -- # nvmfpid=2074344 00:19:30.657 08:53:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:30.657 08:53:47 -- nvmf/common.sh@471 -- # waitforlisten 2074344 00:19:30.657 08:53:47 -- common/autotest_common.sh@817 -- # '[' -z 2074344 ']' 00:19:30.657 08:53:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.657 08:53:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:30.657 08:53:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.657 08:53:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:30.657 08:53:47 -- common/autotest_common.sh@10 -- # set +x 00:19:30.657 [2024-04-26 08:53:47.817135] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:19:30.657 [2024-04-26 08:53:47.817186] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.657 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.657 [2024-04-26 08:53:47.891850] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.916 [2024-04-26 08:53:47.964068] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:30.916 [2024-04-26 08:53:47.964102] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.916 [2024-04-26 08:53:47.964111] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.916 [2024-04-26 08:53:47.964120] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.916 [2024-04-26 08:53:47.964127] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.916 [2024-04-26 08:53:47.964150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.483 08:53:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:31.483 08:53:48 -- common/autotest_common.sh@850 -- # return 0 00:19:31.483 08:53:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:31.483 08:53:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:31.483 08:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:31.483 08:53:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.483 08:53:48 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:31.483 08:53:48 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:31.483 08:53:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.483 08:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:31.483 [2024-04-26 08:53:48.662603] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.483 08:53:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.483 08:53:48 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:31.483 08:53:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.483 08:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:31.483 08:53:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.483 08:53:48 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.483 08:53:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.483 08:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:31.483 [2024-04-26 08:53:48.682772] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.483 08:53:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.483 08:53:48 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:31.483 08:53:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.483 08:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:31.483 08:53:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.483 08:53:48 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:31.483 08:53:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.483 08:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:31.483 malloc0 00:19:31.483 08:53:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.483 08:53:48 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:31.483 08:53:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:31.483 08:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:31.483 08:53:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:31.483 08:53:48 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:31.483 08:53:48 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:31.483 08:53:48 -- nvmf/common.sh@521 -- # config=() 00:19:31.483 08:53:48 -- nvmf/common.sh@521 -- # local subsystem config 00:19:31.483 08:53:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:31.483 08:53:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:31.483 { 00:19:31.483 "params": { 00:19:31.483 "name": "Nvme$subsystem", 00:19:31.483 "trtype": "$TEST_TRANSPORT", 00:19:31.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:31.483 "adrfam": "ipv4", 00:19:31.483 "trsvcid": "$NVMF_PORT", 00:19:31.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:31.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:31.483 "hdgst": ${hdgst:-false}, 00:19:31.483 "ddgst": ${ddgst:-false} 00:19:31.483 }, 00:19:31.483 "method": "bdev_nvme_attach_controller" 00:19:31.483 } 00:19:31.483 EOF 00:19:31.483 )") 00:19:31.742 08:53:48 -- nvmf/common.sh@543 -- # cat 00:19:31.742 08:53:48 -- nvmf/common.sh@545 -- # jq . 00:19:31.742 08:53:48 -- nvmf/common.sh@546 -- # IFS=, 00:19:31.742 08:53:48 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:31.742 "params": { 00:19:31.742 "name": "Nvme1", 00:19:31.742 "trtype": "tcp", 00:19:31.742 "traddr": "10.0.0.2", 00:19:31.742 "adrfam": "ipv4", 00:19:31.742 "trsvcid": "4420", 00:19:31.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.742 "hdgst": false, 00:19:31.742 "ddgst": false 00:19:31.742 }, 00:19:31.742 "method": "bdev_nvme_attach_controller" 00:19:31.742 }' 00:19:31.742 [2024-04-26 08:53:48.772750] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:19:31.742 [2024-04-26 08:53:48.772796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074625 ] 00:19:31.742 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.742 [2024-04-26 08:53:48.842038] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.742 [2024-04-26 08:53:48.911736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.999 Running I/O for 10 seconds... 
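Unlike the queue_depth run, this bdevperf instance is not configured over an RPC socket: gen_nvmf_target_json assembles the bdev_nvme_attach_controller entry shown in the printf above and hands it to bdevperf through a /dev/fd process substitution. A rough standalone equivalent, as a sketch: the params block is copied verbatim from the trace, but the outer "subsystems"/"config" wrapper is an assumption about what gen_nvmf_target_json emits and is not itself visible here.

cat > /tmp/nvme1.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false } } ] } ] }
JSON
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /tmp/nvme1.json -t 10 -q 128 -w verify -o 8192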
00:19:42.003 00:19:42.003 Latency(us) 00:19:42.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.003 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:19:42.003 Verification LBA range: start 0x0 length 0x1000 00:19:42.003 Nvme1n1 : 10.01 8014.02 62.61 0.00 0.00 15932.65 1343.49 48653.93 00:19:42.003 =================================================================================================================== 00:19:42.003 Total : 8014.02 62.61 0.00 0.00 15932.65 1343.49 48653.93 00:19:42.261 08:53:59 -- target/zcopy.sh@39 -- # perfpid=2076362 00:19:42.261 08:53:59 -- target/zcopy.sh@41 -- # xtrace_disable 00:19:42.261 08:53:59 -- common/autotest_common.sh@10 -- # set +x 00:19:42.261 08:53:59 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:42.261 08:53:59 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:42.261 08:53:59 -- nvmf/common.sh@521 -- # config=() 00:19:42.261 08:53:59 -- nvmf/common.sh@521 -- # local subsystem config 00:19:42.261 08:53:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:42.261 08:53:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:42.261 { 00:19:42.261 "params": { 00:19:42.261 "name": "Nvme$subsystem", 00:19:42.261 "trtype": "$TEST_TRANSPORT", 00:19:42.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.261 "adrfam": "ipv4", 00:19:42.261 "trsvcid": "$NVMF_PORT", 00:19:42.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.262 "hdgst": ${hdgst:-false}, 00:19:42.262 "ddgst": ${ddgst:-false} 00:19:42.262 }, 00:19:42.262 "method": "bdev_nvme_attach_controller" 00:19:42.262 } 00:19:42.262 EOF 00:19:42.262 )") 00:19:42.262 08:53:59 -- nvmf/common.sh@543 -- # cat 00:19:42.262 [2024-04-26 08:53:59.352355] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:42.262 [2024-04-26 08:53:59.352387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:42.262 08:53:59 -- nvmf/common.sh@545 -- # jq . 
00:19:42.262 08:53:59 -- nvmf/common.sh@546 -- # IFS=, 00:19:42.262 08:53:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:42.262 "params": { 00:19:42.262 "name": "Nvme1", 00:19:42.262 "trtype": "tcp", 00:19:42.262 "traddr": "10.0.0.2", 00:19:42.262 "adrfam": "ipv4", 00:19:42.262 "trsvcid": "4420", 00:19:42.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.262 "hdgst": false, 00:19:42.262 "ddgst": false 00:19:42.262 }, 00:19:42.262 "method": "bdev_nvme_attach_controller" 00:19:42.262 }' 00:19:42.262 [2024-04-26 08:53:59.364354] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:42.262 [2024-04-26 08:53:59.364369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:42.262 [2024-04-26 08:53:59.376381] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:42.262 [2024-04-26 08:53:59.376393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:42.262 [2024-04-26 08:53:59.388410] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:42.262 [2024-04-26 08:53:59.388421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:42.262 [2024-04-26 08:53:59.389558] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:19:42.262 [2024-04-26 08:53:59.389605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2076362 ] 00:19:42.262 [2024-04-26 08:53:59.400444] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:42.262 [2024-04-26 08:53:59.400465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:42.262 [2024-04-26 08:53:59.412482] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:42.262 [2024-04-26 08:53:59.412495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:42.262 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.262 [2024-04-26 08:53:59.424509] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:42.262 [2024-04-26 08:53:59.424527] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:42.262 [2024-04-26 08:53:59.436543] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:42.262 [2024-04-26 08:53:59.436554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:42.262 [2024-04-26 08:53:59.448575] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:42.262 [2024-04-26 08:53:59.448588] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:42.262 [2024-04-26 08:53:59.459704] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.262 [2024-04-26 08:53:59.460607] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:42.262 [2024-04-26 08:53:59.460619] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:42.262 [2024-04-26 08:53:59.472641] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:42.262 [2024-04-26 08:53:59.472654] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
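Two notes on what is interleaved above. First, the completed 10-second verify pass is internally consistent: 8014.02 IOPS x 8 KiB per I/O works out to 62.6 MiB/s, matching the MiB/s column of the table. Second, the repeating 'Requested NSID 1 already in use' / 'Unable to add namespace' pairs that run from here to the end of the test are expected chatter rather than a failure: while the 5-second randrw job is set up and run, the script keeps re-issuing nvmf_subsystem_add_ns for the NSID that malloc0 already holds, apparently to exercise the RPC error path against a live subsystem; every pair is one rejected attempt. A one-shot reproduction of the same error, sketched using only RPCs already seen in this trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # retry is rejected: NSID 1 already in use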
00:19:42.262 [... error pair repeats from 08:53:59.484670 through 08:53:59.520775 ...]
00:19:42.520 [2024-04-26 08:53:59.529759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:42.521 [... error pair repeats from 08:53:59.532801 through 08:53:59.677194 ...]
00:19:42.521 Running I/O for 5 seconds...
00:19:42.521 [... error pair repeats from 08:53:59.689223 onward (continued below) ...]
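"Running I/O for 5 seconds..." is bdevperf's start banner, and the EAL parameters line above shows the app is bdevperf pinned to core 0 (-c 0x1). The full bdevperf command line is not captured in this log, so the sketch below is an assumption: queue depth, I/O size, workload and the config path are illustrative; only the 5-second duration is implied by the banner.

    # Sketch of a bdevperf run of this shape; /tmp/bdevperf.json and the -q/-o/-w
    # values are assumed, not taken from this log:
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 5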
00:19:42.781 [... from here to the end of this capture the log is only the same subsystem.c:1906 "Requested NSID 1 already in use" / nvmf_rpc.c:1534 "Unable to add namespace" pair, repeating two lines every ~10-20 ms from 08:53:59.846366 through 08:54:03.435983 (wall clock 00:19:42.781 -> 00:19:46.408); the capture then cuts off mid-entry at [2024-04-26 08:54:03.436003] ...]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.408 [2024-04-26 08:54:03.450482] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.408 [2024-04-26 08:54:03.450501] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.408 [2024-04-26 08:54:03.465721] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.408 [2024-04-26 08:54:03.465740] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.408 [2024-04-26 08:54:03.479940] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.408 [2024-04-26 08:54:03.479961] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.408 [2024-04-26 08:54:03.491210] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.408 [2024-04-26 08:54:03.491230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.408 [2024-04-26 08:54:03.505732] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.408 [2024-04-26 08:54:03.505751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.408 [2024-04-26 08:54:03.521292] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.408 [2024-04-26 08:54:03.521312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.408 [2024-04-26 08:54:03.536302] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.408 [2024-04-26 08:54:03.536322] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.408 [2024-04-26 08:54:03.551378] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.408 [2024-04-26 08:54:03.551398] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.408 [2024-04-26 08:54:03.565551] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.408 [2024-04-26 08:54:03.565571] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.408 [2024-04-26 08:54:03.579089] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.408 [2024-04-26 08:54:03.579108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.408 [2024-04-26 08:54:03.592998] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.408 [2024-04-26 08:54:03.593018] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.408 [2024-04-26 08:54:03.606558] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.408 [2024-04-26 08:54:03.606577] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.408 [2024-04-26 08:54:03.620786] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.408 [2024-04-26 08:54:03.620806] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.408 [2024-04-26 08:54:03.629215] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.408 [2024-04-26 08:54:03.629233] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.408 [2024-04-26 08:54:03.644281] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.408 [2024-04-26 08:54:03.644301] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.658441] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.658471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.672117] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.672137] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.686109] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.686128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.702035] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.702054] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.716987] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.717006] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.731635] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.731655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.745440] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.745465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.759997] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.760017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.774791] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.774811] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.787351] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.787372] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.798329] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.798351] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.812571] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.812592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.827573] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.827593] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.842576] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.842598] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.857371] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.857391] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.872553] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.872574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.885438] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.885465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.899344] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.899365] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.667 [2024-04-26 08:54:03.913264] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.667 [2024-04-26 08:54:03.913302] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.926 [2024-04-26 08:54:03.925921] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:03.925942] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:03.940730] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:03.940750] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:03.954705] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:03.954725] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:03.968401] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:03.968421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:03.981966] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:03.981986] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:03.996028] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:03.996047] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:04.009700] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:04.009721] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:04.023606] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:04.023626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:04.037030] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:04.037050] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:04.052165] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:04.052184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:04.067434] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:04.067461] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:04.082269] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:04.082289] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:04.097050] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:04.097071] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:04.110986] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:04.111006] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:04.124770] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:04.124789] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:04.138418] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:04.138438] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:04.152049] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:04.152070] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:46.927 [2024-04-26 08:54:04.165474] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:46.927 [2024-04-26 08:54:04.165495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.185 [2024-04-26 08:54:04.179492] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.185 [2024-04-26 08:54:04.179514] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.185 [2024-04-26 08:54:04.193578] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.185 [2024-04-26 08:54:04.193600] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.185 [2024-04-26 08:54:04.204961] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.185 [2024-04-26 08:54:04.204982] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.185 [2024-04-26 08:54:04.218888] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.185 [2024-04-26 08:54:04.218908] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.185 [2024-04-26 08:54:04.232388] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.185 [2024-04-26 08:54:04.232409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.185 [2024-04-26 08:54:04.246010] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.185 [2024-04-26 08:54:04.246030] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.185 [2024-04-26 08:54:04.259242] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.185 [2024-04-26 08:54:04.259262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.185 [2024-04-26 08:54:04.273009] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.185 [2024-04-26 08:54:04.273029] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.185 [2024-04-26 08:54:04.286755] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.185 [2024-04-26 08:54:04.286775] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.185 [2024-04-26 08:54:04.299425] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.185 [2024-04-26 08:54:04.299446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.185 [2024-04-26 08:54:04.314319] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.185 [2024-04-26 08:54:04.314339] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.186 [2024-04-26 08:54:04.330202] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.186 [2024-04-26 08:54:04.330222] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.186 [2024-04-26 08:54:04.343856] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.186 [2024-04-26 08:54:04.343876] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.186 [2024-04-26 08:54:04.357729] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.186 [2024-04-26 08:54:04.357749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.186 [2024-04-26 08:54:04.370316] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.186 [2024-04-26 08:54:04.370335] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.186 [2024-04-26 08:54:04.384299] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.186 [2024-04-26 08:54:04.384320] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.186 [2024-04-26 08:54:04.398164] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.186 [2024-04-26 08:54:04.398185] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.186 [2024-04-26 08:54:04.410125] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.186 [2024-04-26 08:54:04.410145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.186 [2024-04-26 08:54:04.424348] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.186 [2024-04-26 08:54:04.424367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.444 [2024-04-26 08:54:04.438845] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.444 [2024-04-26 08:54:04.438867] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.444 [2024-04-26 08:54:04.452615] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.444 [2024-04-26 08:54:04.452635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.444 [2024-04-26 08:54:04.466113] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.445 [2024-04-26 08:54:04.466133] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.445 [2024-04-26 08:54:04.480348] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.445 [2024-04-26 08:54:04.480368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.445 [2024-04-26 08:54:04.491732] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.445 [2024-04-26 08:54:04.491752] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.445 [2024-04-26 08:54:04.506634] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.445 [2024-04-26 08:54:04.506655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.445 [2024-04-26 08:54:04.520710] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.445 [2024-04-26 08:54:04.520730] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.445 [2024-04-26 08:54:04.536042] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.445 [2024-04-26 08:54:04.536063] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.445 [2024-04-26 08:54:04.550482] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.445 [2024-04-26 08:54:04.550502] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.445 [2024-04-26 08:54:04.561739] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.445 [2024-04-26 08:54:04.561759] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.445 [2024-04-26 08:54:04.576709] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.445 [2024-04-26 08:54:04.576729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.445 [2024-04-26 08:54:04.592289] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.445 [2024-04-26 08:54:04.592310] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.445 [2024-04-26 08:54:04.606293] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.445 [2024-04-26 08:54:04.606313] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.445 [2024-04-26 08:54:04.620376] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.445 [2024-04-26 08:54:04.620395] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.445 [2024-04-26 08:54:04.631880] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.445 [2024-04-26 08:54:04.631900] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:47.445 [2024-04-26 08:54:04.646098] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:47.445 [2024-04-26 08:54:04.646122] 
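The repeated errors above are the point of this phase of zcopy.sh: NSID 1 is already allocated, so every further nvmf_subsystem_add_ns RPC must be rejected, and the test drives that rejection in a loop (the pause/resume choreography it performs around each attempt is not visible in the trace shown here). A minimal sketch of the same expected-failure loop, assuming a running target reachable through SPDK's scripts/rpc.py; the loop bound of 50 is illustrative, while the NQN and bdev name are the ones used in this run:

    NQN=nqn.2016-06.io.spdk:cnode1
    # The first add claims NSID 1 (malloc0 was created earlier in the run).
    scripts/rpc.py nvmf_subsystem_add_ns "$NQN" malloc0 -n 1
    for i in $(seq 1 50); do
        # Every retry must fail with "Requested NSID 1 already in use".
        if scripts/rpc.py nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 2>/dev/null; then
            echo "unexpected success on attempt $i" >&2
            exit 1
        fi
    done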
00:19:47.708
00:19:47.708 Latency(us)
00:19:47.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:47.708 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:19:47.708 Nvme1n1 : 5.01 16835.29 131.53 0.00 0.00 7596.27 2437.94 34183.58
00:19:47.708 ===================================================================================================================
00:19:47.708 Total : 16835.29 131.53 0.00 0.00 7596.27 2437.94 34183.58
00:19:47.708 [duplicate log lines elided: the same expected error pair repeats while the remaining add attempts drain during shutdown, 08:54:04.708 through 08:54:04.901]
00:19:47.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2076362) - No such process
00:19:47.708 08:54:04 -- target/zcopy.sh@49 -- # wait 2076362
00:19:47.708 08:54:04 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:19:47.708 08:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:47.708 08:54:04 -- common/autotest_common.sh@10 -- # set +x
00:19:47.708 08:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:47.708 08:54:04 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:19:47.708 08:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:47.708 08:54:04 -- common/autotest_common.sh@10 -- # set +x
00:19:47.708 delay0
00:19:47.708 08:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:47.708 08:54:04 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:19:47.708 08:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:47.708 08:54:04 -- common/autotest_common.sh@10 -- # set +x
00:19:47.708 08:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:47.708 08:54:04 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:47.975 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.975 [2024-04-26 08:54:05.081602] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:54.560 Initializing NVMe Controllers 00:19:54.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:54.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:54.560 Initialization complete. Launching workers. 00:19:54.560 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 95 00:19:54.560 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 384, failed to submit 31 00:19:54.560 success 150, unsuccess 234, failed 0 00:19:54.560 08:54:11 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:54.560 08:54:11 -- target/zcopy.sh@60 -- # nvmftestfini 00:19:54.560 08:54:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:54.560 08:54:11 -- nvmf/common.sh@117 -- # sync 00:19:54.560 08:54:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:54.560 08:54:11 -- nvmf/common.sh@120 -- # set +e 00:19:54.560 08:54:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:54.560 08:54:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:54.560 rmmod nvme_tcp 00:19:54.560 rmmod nvme_fabrics 00:19:54.560 rmmod nvme_keyring 00:19:54.560 08:54:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:54.560 08:54:11 -- nvmf/common.sh@124 -- # set -e 00:19:54.560 08:54:11 -- nvmf/common.sh@125 -- # return 0 00:19:54.560 08:54:11 -- nvmf/common.sh@478 -- # '[' -n 2074344 ']' 00:19:54.560 08:54:11 -- nvmf/common.sh@479 -- # killprocess 2074344 00:19:54.560 08:54:11 -- common/autotest_common.sh@936 -- # '[' -z 2074344 ']' 00:19:54.560 08:54:11 -- common/autotest_common.sh@940 -- # kill -0 2074344 00:19:54.560 08:54:11 -- common/autotest_common.sh@941 -- # uname 00:19:54.560 08:54:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:54.560 08:54:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2074344 00:19:54.560 08:54:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:54.560 08:54:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:54.560 08:54:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2074344' 00:19:54.560 killing process with pid 2074344 00:19:54.560 08:54:11 -- common/autotest_common.sh@955 -- # kill 2074344 00:19:54.560 08:54:11 -- common/autotest_common.sh@960 -- # wait 2074344 00:19:54.560 08:54:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:54.560 08:54:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:54.560 08:54:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:54.560 08:54:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:54.560 08:54:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:54.560 08:54:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.560 08:54:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.560 08:54:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.087 08:54:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:57.088 00:19:57.088 real 0m32.737s 00:19:57.088 user 0m41.957s 00:19:57.088 sys 0m13.355s 00:19:57.088 08:54:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:57.088 08:54:13 -- 
common/autotest_common.sh@10 -- # set +x 00:19:57.088 ************************************ 00:19:57.088 END TEST nvmf_zcopy 00:19:57.088 ************************************ 00:19:57.088 08:54:13 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:57.088 08:54:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:57.088 08:54:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:57.088 08:54:13 -- common/autotest_common.sh@10 -- # set +x 00:19:57.088 ************************************ 00:19:57.088 START TEST nvmf_nmic 00:19:57.088 ************************************ 00:19:57.088 08:54:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:57.088 * Looking for test storage... 00:19:57.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:57.088 08:54:14 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:57.088 08:54:14 -- nvmf/common.sh@7 -- # uname -s 00:19:57.088 08:54:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.088 08:54:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.088 08:54:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.088 08:54:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.088 08:54:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.088 08:54:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.088 08:54:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.088 08:54:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.088 08:54:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.088 08:54:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.088 08:54:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:57.088 08:54:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:57.088 08:54:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.088 08:54:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.088 08:54:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.088 08:54:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.088 08:54:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:57.088 08:54:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.088 08:54:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.088 08:54:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.088 08:54:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.088 08:54:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.088 08:54:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.088 08:54:14 -- paths/export.sh@5 -- # export PATH 00:19:57.088 08:54:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.088 08:54:14 -- nvmf/common.sh@47 -- # : 0 00:19:57.088 08:54:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:57.088 08:54:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:57.088 08:54:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.088 08:54:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.088 08:54:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.088 08:54:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:57.088 08:54:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:57.088 08:54:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:57.088 08:54:14 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:57.088 08:54:14 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:57.088 08:54:14 -- target/nmic.sh@14 -- # nvmftestinit 00:19:57.088 08:54:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:57.088 08:54:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.088 08:54:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:57.088 08:54:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:57.088 08:54:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:57.088 08:54:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.088 08:54:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.088 08:54:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.088 08:54:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:57.088 08:54:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:57.088 08:54:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:57.088 08:54:14 -- common/autotest_common.sh@10 -- # set +x 00:20:03.652 08:54:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:20:03.652 08:54:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:03.652 08:54:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:03.652 08:54:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:03.652 08:54:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:03.652 08:54:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:03.652 08:54:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:03.652 08:54:20 -- nvmf/common.sh@295 -- # net_devs=() 00:20:03.652 08:54:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:03.652 08:54:20 -- nvmf/common.sh@296 -- # e810=() 00:20:03.652 08:54:20 -- nvmf/common.sh@296 -- # local -ga e810 00:20:03.652 08:54:20 -- nvmf/common.sh@297 -- # x722=() 00:20:03.652 08:54:20 -- nvmf/common.sh@297 -- # local -ga x722 00:20:03.652 08:54:20 -- nvmf/common.sh@298 -- # mlx=() 00:20:03.652 08:54:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:03.652 08:54:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.652 08:54:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.652 08:54:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.652 08:54:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.652 08:54:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.652 08:54:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.652 08:54:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.653 08:54:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.653 08:54:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.653 08:54:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.653 08:54:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.653 08:54:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:03.653 08:54:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:03.653 08:54:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:03.653 08:54:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.653 08:54:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:03.653 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:03.653 08:54:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.653 08:54:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:03.653 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:03.653 08:54:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
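The discovery pass traced here first builds allowlists of NVMf-capable NICs keyed by PCI vendor/device ID (the two ports above match the e810 entry 0x159b) and then resolves each matching PCI function to its kernel net devices through sysfs, which is where the "Found net devices under ..." lines that follow come from. A standalone sketch of that sysfs lookup, assuming the same PCI address as this run:

    pci=0000:af:00.0    # first E810 port detected above
    for path in /sys/bus/pci/devices/$pci/net/*; do
        # Each entry under .../net/ is a netdev bound to this PCI function,
        # e.g. cvl_0_0 in this run; the -e guard covers a non-matching glob.
        [ -e "$path" ] && echo "Found net device under $pci: ${path##*/}"
    done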
00:20:03.653 08:54:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.653 08:54:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.653 08:54:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:03.653 08:54:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.653 08:54:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:03.653 Found net devices under 0000:af:00.0: cvl_0_0 00:20:03.653 08:54:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.653 08:54:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.653 08:54:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.653 08:54:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:03.653 08:54:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.653 08:54:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:03.653 Found net devices under 0000:af:00.1: cvl_0_1 00:20:03.653 08:54:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.653 08:54:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:03.653 08:54:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:03.653 08:54:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:03.653 08:54:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.653 08:54:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.653 08:54:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:03.653 08:54:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:03.653 08:54:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:03.653 08:54:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:03.653 08:54:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:03.653 08:54:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:03.653 08:54:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.653 08:54:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:03.653 08:54:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:03.653 08:54:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:03.653 08:54:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:03.653 08:54:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:03.653 08:54:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:03.653 08:54:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:03.653 08:54:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:03.653 08:54:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:03.653 08:54:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:03.653 08:54:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:03.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:03.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:20:03.653 00:20:03.653 --- 10.0.0.2 ping statistics --- 00:20:03.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.653 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:20:03.653 08:54:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:03.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:03.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:20:03.653 00:20:03.653 --- 10.0.0.1 ping statistics --- 00:20:03.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.653 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:20:03.653 08:54:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.653 08:54:20 -- nvmf/common.sh@411 -- # return 0 00:20:03.653 08:54:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:03.653 08:54:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.653 08:54:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:03.653 08:54:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.653 08:54:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:03.653 08:54:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:03.653 08:54:20 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:03.653 08:54:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:03.653 08:54:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:03.653 08:54:20 -- common/autotest_common.sh@10 -- # set +x 00:20:03.653 08:54:20 -- nvmf/common.sh@470 -- # nvmfpid=2082605 00:20:03.653 08:54:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:03.653 08:54:20 -- nvmf/common.sh@471 -- # waitforlisten 2082605 00:20:03.653 08:54:20 -- common/autotest_common.sh@817 -- # '[' -z 2082605 ']' 00:20:03.653 08:54:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.653 08:54:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:03.653 08:54:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.653 08:54:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:03.653 08:54:20 -- common/autotest_common.sh@10 -- # set +x 00:20:03.653 [2024-04-26 08:54:20.807290] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:20:03.653 [2024-04-26 08:54:20.807338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.653 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.653 [2024-04-26 08:54:20.882456] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.912 [2024-04-26 08:54:20.951932] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.912 [2024-04-26 08:54:20.951974] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
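The nvmf_tcp_init sequence traced above gives target and initiator separate network stacks on a single host: port cvl_0_0 is moved into the namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and that split is why nvmf_tgt is launched under "ip netns exec cvl_0_0_ns_spdk". The same topology can be reproduced without E810 hardware by substituting a veth pair for the two physical ports; the veth substitution below is an assumption for illustration, not what this CI run does:

    ip netns add cvl_0_0_ns_spdk
    ip link add cvl_0_1 type veth peer name cvl_0_0   # stand-ins for the two NIC ports
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target side enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                # initiator -> target, as in the trace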
00:20:03.912 [2024-04-26 08:54:20.951984] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.912 [2024-04-26 08:54:20.951992] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.912 [2024-04-26 08:54:20.952001] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.912 [2024-04-26 08:54:20.952059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.912 [2024-04-26 08:54:20.952156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.912 [2024-04-26 08:54:20.952219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.912 [2024-04-26 08:54:20.952220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.481 08:54:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:04.481 08:54:21 -- common/autotest_common.sh@850 -- # return 0 00:20:04.481 08:54:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:04.481 08:54:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:04.481 08:54:21 -- common/autotest_common.sh@10 -- # set +x 00:20:04.481 08:54:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.481 08:54:21 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:04.481 08:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.481 08:54:21 -- common/autotest_common.sh@10 -- # set +x 00:20:04.481 [2024-04-26 08:54:21.664360] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.481 08:54:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.481 08:54:21 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:04.481 08:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.481 08:54:21 -- common/autotest_common.sh@10 -- # set +x 00:20:04.481 Malloc0 00:20:04.481 08:54:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.481 08:54:21 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:04.481 08:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.481 08:54:21 -- common/autotest_common.sh@10 -- # set +x 00:20:04.481 08:54:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.481 08:54:21 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:04.481 08:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.481 08:54:21 -- common/autotest_common.sh@10 -- # set +x 00:20:04.481 08:54:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.481 08:54:21 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.481 08:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.481 08:54:21 -- common/autotest_common.sh@10 -- # set +x 00:20:04.481 [2024-04-26 08:54:21.718933] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.481 08:54:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.481 08:54:21 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:04.481 test case1: single bdev can't be used in multiple subsystems 00:20:04.481 08:54:21 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:04.481 08:54:21 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.481 08:54:21 -- common/autotest_common.sh@10 -- # set +x 00:20:04.783 08:54:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.783 08:54:21 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:04.783 08:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.783 08:54:21 -- common/autotest_common.sh@10 -- # set +x 00:20:04.783 08:54:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.783 08:54:21 -- target/nmic.sh@28 -- # nmic_status=0 00:20:04.784 08:54:21 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:04.784 08:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.784 08:54:21 -- common/autotest_common.sh@10 -- # set +x 00:20:04.784 [2024-04-26 08:54:21.746822] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:04.784 [2024-04-26 08:54:21.746843] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:04.784 [2024-04-26 08:54:21.746853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:04.784 request: 00:20:04.784 { 00:20:04.784 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:04.784 "namespace": { 00:20:04.784 "bdev_name": "Malloc0", 00:20:04.784 "no_auto_visible": false 00:20:04.784 }, 00:20:04.784 "method": "nvmf_subsystem_add_ns", 00:20:04.784 "req_id": 1 00:20:04.784 } 00:20:04.784 Got JSON-RPC error response 00:20:04.784 response: 00:20:04.784 { 00:20:04.784 "code": -32602, 00:20:04.784 "message": "Invalid parameters" 00:20:04.784 } 00:20:04.784 08:54:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:04.784 08:54:21 -- target/nmic.sh@29 -- # nmic_status=1 00:20:04.784 08:54:21 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:04.784 08:54:21 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:20:04.784 Adding namespace failed - expected result. 
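The failure above is the point of test case1: when a namespace is added to a subsystem, SPDK takes an exclusive_write claim on the backing bdev, so the second nvmf_subsystem_add_ns against cnode2 is rejected with JSON-RPC error -32602 ("Adding namespace failed - expected result."). A minimal way to reproduce the same check by hand, assuming a running nvmf_tgt and the in-tree scripts/rpc.py (full paths abbreviated), might look like:

    # sketch only; RPC names and flags taken from the rpc_cmd calls logged above
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        && echo 'unexpected: second claim succeeded' \
        || echo 'rejected as expected: Malloc0 already claimed'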
00:20:04.784 08:54:21 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:04.784 test case2: host connect to nvmf target in multiple paths 00:20:04.784 08:54:21 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:04.784 08:54:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.784 08:54:21 -- common/autotest_common.sh@10 -- # set +x 00:20:04.784 [2024-04-26 08:54:21.762975] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:04.784 08:54:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.784 08:54:21 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:06.160 08:54:23 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:20:07.551 08:54:24 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:07.551 08:54:24 -- common/autotest_common.sh@1184 -- # local i=0 00:20:07.551 08:54:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:07.551 08:54:24 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:07.551 08:54:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:09.467 08:54:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:09.467 08:54:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:09.467 08:54:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:09.467 08:54:26 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:09.467 08:54:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:09.467 08:54:26 -- common/autotest_common.sh@1194 -- # return 0 00:20:09.467 08:54:26 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:09.467 [global] 00:20:09.467 thread=1 00:20:09.467 invalidate=1 00:20:09.467 rw=write 00:20:09.467 time_based=1 00:20:09.467 runtime=1 00:20:09.467 ioengine=libaio 00:20:09.467 direct=1 00:20:09.467 bs=4096 00:20:09.467 iodepth=1 00:20:09.467 norandommap=0 00:20:09.467 numjobs=1 00:20:09.467 00:20:09.467 verify_dump=1 00:20:09.467 verify_backlog=512 00:20:09.467 verify_state_save=0 00:20:09.467 do_verify=1 00:20:09.467 verify=crc32c-intel 00:20:09.467 [job0] 00:20:09.467 filename=/dev/nvme0n1 00:20:09.467 Could not set queue depth (nvme0n1) 00:20:09.726 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:09.726 fio-3.35 00:20:09.726 Starting 1 thread 00:20:10.664 00:20:10.664 job0: (groupid=0, jobs=1): err= 0: pid=2083843: Fri Apr 26 08:54:27 2024 00:20:10.664 read: IOPS=24, BW=97.2KiB/s (99.5kB/s)(100KiB/1029msec) 00:20:10.664 slat (nsec): min=11599, max=27968, avg=24640.08, stdev=2844.85 00:20:10.664 clat (usec): min=1273, max=42163, avg=35368.75, stdev=15170.27 00:20:10.664 lat (usec): min=1299, max=42188, avg=35393.39, stdev=15169.66 00:20:10.664 clat percentiles (usec): 00:20:10.664 | 1.00th=[ 1270], 5.00th=[ 1287], 10.00th=[ 1319], 20.00th=[41157], 00:20:10.664 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:20:10.664 | 
70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:10.664 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:10.664 | 99.99th=[42206] 00:20:10.664 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:20:10.664 slat (nsec): min=7106, max=33983, avg=12001.53, stdev=1901.54 00:20:10.664 clat (usec): min=201, max=715, avg=266.65, stdev=94.86 00:20:10.664 lat (usec): min=221, max=748, avg=278.65, stdev=94.57 00:20:10.664 clat percentiles (usec): 00:20:10.664 | 1.00th=[ 210], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 215], 00:20:10.664 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 231], 00:20:10.664 | 70.00th=[ 243], 80.00th=[ 302], 90.00th=[ 404], 95.00th=[ 478], 00:20:10.664 | 99.00th=[ 594], 99.50th=[ 594], 99.90th=[ 717], 99.95th=[ 717], 00:20:10.664 | 99.99th=[ 717] 00:20:10.664 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:20:10.664 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:10.664 lat (usec) : 250=67.41%, 500=24.39%, 750=3.54% 00:20:10.664 lat (msec) : 2=0.74%, 50=3.91% 00:20:10.664 cpu : usr=0.19%, sys=0.68%, ctx=537, majf=0, minf=2 00:20:10.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:10.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.664 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:10.664 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:10.664 00:20:10.664 Run status group 0 (all jobs): 00:20:10.664 READ: bw=97.2KiB/s (99.5kB/s), 97.2KiB/s-97.2KiB/s (99.5kB/s-99.5kB/s), io=100KiB (102kB), run=1029-1029msec 00:20:10.664 WRITE: bw=1990KiB/s (2038kB/s), 1990KiB/s-1990KiB/s (2038kB/s-2038kB/s), io=2048KiB (2097kB), run=1029-1029msec 00:20:10.664 00:20:10.664 Disk stats (read/write): 00:20:10.664 nvme0n1: ios=69/512, merge=0/0, ticks=774/132, in_queue=906, util=93.19% 00:20:10.924 08:54:27 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:10.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:10.924 08:54:28 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:10.924 08:54:28 -- common/autotest_common.sh@1205 -- # local i=0 00:20:10.924 08:54:28 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:10.924 08:54:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:10.924 08:54:28 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:10.924 08:54:28 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:10.924 08:54:28 -- common/autotest_common.sh@1217 -- # return 0 00:20:10.924 08:54:28 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:10.924 08:54:28 -- target/nmic.sh@53 -- # nvmftestfini 00:20:10.924 08:54:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:10.924 08:54:28 -- nvmf/common.sh@117 -- # sync 00:20:10.924 08:54:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:10.924 08:54:28 -- nvmf/common.sh@120 -- # set +e 00:20:10.924 08:54:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:10.924 08:54:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:10.924 rmmod nvme_tcp 00:20:10.924 rmmod nvme_fabrics 00:20:11.184 rmmod nvme_keyring 00:20:11.184 08:54:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:11.184 08:54:28 -- nvmf/common.sh@124 -- # set -e 
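The teardown interleaved above amounts to three steps: disconnect the kernel initiator from cnode1 (which drops both the 4420 and 4421 paths added in test case2, hence "disconnected 2 controller(s)"), unload the nvme transport modules, then kill the target by its saved pid. A hand-run equivalent, assuming the same NQN and that $nvmfpid holds the pid printed at startup (2082605 in this run), would be roughly:

    # sketch of what waitforserial_disconnect + nvmftestfini do above
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # detaches both multipath controllers
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill $nvmfpid && wait $nvmfpid                    # nvmf_tgt started earlier as pid 2082605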
00:20:11.184 08:54:28 -- nvmf/common.sh@125 -- # return 0 00:20:11.184 08:54:28 -- nvmf/common.sh@478 -- # '[' -n 2082605 ']' 00:20:11.184 08:54:28 -- nvmf/common.sh@479 -- # killprocess 2082605 00:20:11.184 08:54:28 -- common/autotest_common.sh@936 -- # '[' -z 2082605 ']' 00:20:11.184 08:54:28 -- common/autotest_common.sh@940 -- # kill -0 2082605 00:20:11.184 08:54:28 -- common/autotest_common.sh@941 -- # uname 00:20:11.184 08:54:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:11.184 08:54:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2082605 00:20:11.184 08:54:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:11.184 08:54:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:11.184 08:54:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2082605' 00:20:11.184 killing process with pid 2082605 00:20:11.184 08:54:28 -- common/autotest_common.sh@955 -- # kill 2082605 00:20:11.184 08:54:28 -- common/autotest_common.sh@960 -- # wait 2082605 00:20:11.444 08:54:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:11.444 08:54:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:11.444 08:54:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:11.444 08:54:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.444 08:54:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:11.444 08:54:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.444 08:54:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.444 08:54:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.353 08:54:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:13.353 00:20:13.353 real 0m16.635s 00:20:13.353 user 0m40.029s 00:20:13.353 sys 0m6.174s 00:20:13.353 08:54:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:13.353 08:54:30 -- common/autotest_common.sh@10 -- # set +x 00:20:13.353 ************************************ 00:20:13.353 END TEST nvmf_nmic 00:20:13.353 ************************************ 00:20:13.353 08:54:30 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:13.353 08:54:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:13.353 08:54:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:13.353 08:54:30 -- common/autotest_common.sh@10 -- # set +x 00:20:13.614 ************************************ 00:20:13.614 START TEST nvmf_fio_target 00:20:13.614 ************************************ 00:20:13.614 08:54:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:13.614 * Looking for test storage... 
00:20:13.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:13.614 08:54:30 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.614 08:54:30 -- nvmf/common.sh@7 -- # uname -s 00:20:13.614 08:54:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.614 08:54:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.614 08:54:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.614 08:54:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.614 08:54:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.614 08:54:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.614 08:54:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.614 08:54:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.614 08:54:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.614 08:54:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.614 08:54:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:13.614 08:54:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:13.614 08:54:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.614 08:54:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.614 08:54:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:13.614 08:54:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.614 08:54:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:13.614 08:54:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.614 08:54:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.614 08:54:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.614 08:54:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.615 08:54:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.615 08:54:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.615 08:54:30 -- paths/export.sh@5 -- # export PATH 00:20:13.615 08:54:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.615 08:54:30 -- nvmf/common.sh@47 -- # : 0 00:20:13.615 08:54:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:13.615 08:54:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:13.615 08:54:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.615 08:54:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.615 08:54:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.615 08:54:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:13.615 08:54:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:13.615 08:54:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:13.615 08:54:30 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:13.615 08:54:30 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:13.615 08:54:30 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:13.615 08:54:30 -- target/fio.sh@16 -- # nvmftestinit 00:20:13.615 08:54:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:13.615 08:54:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.615 08:54:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:13.615 08:54:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:13.615 08:54:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:13.615 08:54:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.615 08:54:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.615 08:54:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.615 08:54:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:13.615 08:54:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:13.615 08:54:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:13.615 08:54:30 -- common/autotest_common.sh@10 -- # set +x 00:20:20.194 08:54:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:20.194 08:54:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:20.194 08:54:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:20.194 08:54:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:20.194 08:54:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:20.194 08:54:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:20.194 08:54:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:20.194 08:54:36 -- nvmf/common.sh@295 -- # net_devs=() 
00:20:20.194 08:54:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:20.194 08:54:36 -- nvmf/common.sh@296 -- # e810=() 00:20:20.194 08:54:36 -- nvmf/common.sh@296 -- # local -ga e810 00:20:20.194 08:54:36 -- nvmf/common.sh@297 -- # x722=() 00:20:20.194 08:54:36 -- nvmf/common.sh@297 -- # local -ga x722 00:20:20.194 08:54:36 -- nvmf/common.sh@298 -- # mlx=() 00:20:20.194 08:54:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:20.194 08:54:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.194 08:54:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.194 08:54:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.194 08:54:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.194 08:54:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.194 08:54:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.194 08:54:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.194 08:54:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.194 08:54:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.194 08:54:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.194 08:54:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.194 08:54:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:20.194 08:54:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:20.194 08:54:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:20.194 08:54:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.194 08:54:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:20.194 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:20.194 08:54:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.194 08:54:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:20.194 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:20.194 08:54:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:20.194 08:54:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.194 08:54:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.194 08:54:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:20.194 08:54:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:20:20.194 08:54:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:20.194 Found net devices under 0000:af:00.0: cvl_0_0 00:20:20.194 08:54:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.194 08:54:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.194 08:54:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.194 08:54:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:20.194 08:54:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.194 08:54:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:20.194 Found net devices under 0000:af:00.1: cvl_0_1 00:20:20.194 08:54:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.194 08:54:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:20.194 08:54:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:20.194 08:54:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:20.194 08:54:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:20.194 08:54:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.194 08:54:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.194 08:54:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.194 08:54:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:20.194 08:54:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:20.194 08:54:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:20.194 08:54:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:20.194 08:54:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:20.194 08:54:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.194 08:54:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:20.194 08:54:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:20.194 08:54:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:20.194 08:54:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:20.194 08:54:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:20.194 08:54:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:20.194 08:54:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:20.194 08:54:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:20.194 08:54:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:20.194 08:54:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:20.194 08:54:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:20.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:20:20.194 00:20:20.194 --- 10.0.0.2 ping statistics --- 00:20:20.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.194 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:20:20.194 08:54:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:20.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:20.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:20:20.194 00:20:20.194 --- 10.0.0.1 ping statistics --- 00:20:20.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.194 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:20:20.195 08:54:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.195 08:54:37 -- nvmf/common.sh@411 -- # return 0 00:20:20.195 08:54:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:20.195 08:54:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.195 08:54:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:20.195 08:54:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:20.195 08:54:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.195 08:54:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:20.195 08:54:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:20.195 08:54:37 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:20.195 08:54:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:20.195 08:54:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:20.195 08:54:37 -- common/autotest_common.sh@10 -- # set +x 00:20:20.195 08:54:37 -- nvmf/common.sh@470 -- # nvmfpid=2087781 00:20:20.195 08:54:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:20.195 08:54:37 -- nvmf/common.sh@471 -- # waitforlisten 2087781 00:20:20.195 08:54:37 -- common/autotest_common.sh@817 -- # '[' -z 2087781 ']' 00:20:20.195 08:54:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.195 08:54:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:20.195 08:54:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.195 08:54:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:20.195 08:54:37 -- common/autotest_common.sh@10 -- # set +x 00:20:20.195 [2024-04-26 08:54:37.327583] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:20:20.195 [2024-04-26 08:54:37.327631] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.195 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.195 [2024-04-26 08:54:37.400702] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:20.455 [2024-04-26 08:54:37.469427] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.455 [2024-04-26 08:54:37.469473] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.455 [2024-04-26 08:54:37.469483] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.455 [2024-04-26 08:54:37.469491] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.455 [2024-04-26 08:54:37.469499] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
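For fio.sh the target is launched inside the cvl_0_0_ns_spdk namespace that the pings above just verified; the exact invocation is in the log. A condensed sketch of that launch (repo path abbreviated), with the meaning of the flags as used here:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # -m 0xF   : core mask, one reactor on each of cores 0-3 (see the four
    #            reactor_run notices that follow)
    # -e 0xFFFF: enable all tracepoint groups; per the NOTICE lines a snapshot
    #            can be captured while it runs with 'spdk_trace -s nvmf -i 0',
    #            or /dev/shm/nvmf_trace.0 can be copied for offline analysis
    # -i 0     : shared-memory instance id, matching $NVMF_APP_SHM_ID used by
    #            process_shm in the exit trap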
00:20:20.455 [2024-04-26 08:54:37.469543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.455 [2024-04-26 08:54:37.469663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.455 [2024-04-26 08:54:37.469746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.455 [2024-04-26 08:54:37.469748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.024 08:54:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:21.024 08:54:38 -- common/autotest_common.sh@850 -- # return 0 00:20:21.024 08:54:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:21.024 08:54:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:21.024 08:54:38 -- common/autotest_common.sh@10 -- # set +x 00:20:21.024 08:54:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.024 08:54:38 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:21.283 [2024-04-26 08:54:38.322723] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.283 08:54:38 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:21.542 08:54:38 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:21.542 08:54:38 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:21.542 08:54:38 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:21.542 08:54:38 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:21.802 08:54:38 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:21.802 08:54:38 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:22.095 08:54:39 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:22.095 08:54:39 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:22.356 08:54:39 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:22.356 08:54:39 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:22.356 08:54:39 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:22.616 08:54:39 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:22.616 08:54:39 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:22.875 08:54:39 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:22.875 08:54:39 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:22.875 08:54:40 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:23.135 08:54:40 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:23.135 08:54:40 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:23.394 08:54:40 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:23.394 08:54:40 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:23.394 08:54:40 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.653 [2024-04-26 08:54:40.779384] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.653 08:54:40 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:23.912 08:54:40 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:24.171 08:54:41 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:25.549 08:54:42 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:25.549 08:54:42 -- common/autotest_common.sh@1184 -- # local i=0 00:20:25.549 08:54:42 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:25.549 08:54:42 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:20:25.549 08:54:42 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:20:25.549 08:54:42 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:27.457 08:54:44 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:27.457 08:54:44 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:27.457 08:54:44 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:27.457 08:54:44 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:20:27.457 08:54:44 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:27.457 08:54:44 -- common/autotest_common.sh@1194 -- # return 0 00:20:27.457 08:54:44 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:27.457 [global] 00:20:27.457 thread=1 00:20:27.457 invalidate=1 00:20:27.457 rw=write 00:20:27.457 time_based=1 00:20:27.457 runtime=1 00:20:27.457 ioengine=libaio 00:20:27.457 direct=1 00:20:27.457 bs=4096 00:20:27.457 iodepth=1 00:20:27.457 norandommap=0 00:20:27.457 numjobs=1 00:20:27.457 00:20:27.457 verify_dump=1 00:20:27.457 verify_backlog=512 00:20:27.457 verify_state_save=0 00:20:27.457 do_verify=1 00:20:27.457 verify=crc32c-intel 00:20:27.457 [job0] 00:20:27.457 filename=/dev/nvme0n1 00:20:27.457 [job1] 00:20:27.457 filename=/dev/nvme0n2 00:20:27.457 [job2] 00:20:27.457 filename=/dev/nvme0n3 00:20:27.457 [job3] 00:20:27.457 filename=/dev/nvme0n4 00:20:27.457 Could not set queue depth (nvme0n1) 00:20:27.457 Could not set queue depth (nvme0n2) 00:20:27.457 Could not set queue depth (nvme0n3) 00:20:27.457 Could not set queue depth (nvme0n4) 00:20:28.024 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:28.024 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:28.024 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:28.024 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:28.024 fio-3.35 
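The [global]/[job0..3] stanzas printed by fio-wrapper above form a complete fio job file, one sequential-write verify job per namespace of cnode1 (Malloc0, Malloc1, the raid0 bdev built from Malloc2/Malloc3, and the concat0 bdev built from Malloc4-6, enumerated by the host as nvme0n1 through nvme0n4, which is why waitforserial waits for 4 devices). Saved to a file, an abridged single-job version can be replayed with plain fio, assuming the same device enumeration:

    # sketch; option values copied from the job file logged above
    cat > /tmp/nvmf_write_verify.fio <<'EOF'
    [global]
    rw=write
    bs=4096
    iodepth=1
    ioengine=libaio
    direct=1
    thread=1
    time_based=1
    runtime=1
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio /tmp/nvmf_write_verify.fio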
00:20:28.024 Starting 4 threads 00:20:29.402 00:20:29.402 job0: (groupid=0, jobs=1): err= 0: pid=2089226: Fri Apr 26 08:54:46 2024 00:20:29.402 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:20:29.402 slat (nsec): min=8810, max=41681, avg=9660.21, stdev=1724.89 00:20:29.402 clat (usec): min=393, max=1847, avg=598.63, stdev=69.82 00:20:29.402 lat (usec): min=403, max=1857, avg=608.29, stdev=69.81 00:20:29.402 clat percentiles (usec): 00:20:29.402 | 1.00th=[ 478], 5.00th=[ 529], 10.00th=[ 553], 20.00th=[ 570], 00:20:29.402 | 30.00th=[ 578], 40.00th=[ 586], 50.00th=[ 594], 60.00th=[ 594], 00:20:29.402 | 70.00th=[ 603], 80.00th=[ 619], 90.00th=[ 660], 95.00th=[ 693], 00:20:29.402 | 99.00th=[ 734], 99.50th=[ 750], 99.90th=[ 1450], 99.95th=[ 1844], 00:20:29.402 | 99.99th=[ 1844] 00:20:29.402 write: IOPS=1034, BW=4140KiB/s (4239kB/s)(4144KiB/1001msec); 0 zone resets 00:20:29.402 slat (usec): min=12, max=40304, avg=67.94, stdev=1344.42 00:20:29.402 clat (usec): min=216, max=3531, avg=290.04, stdev=121.71 00:20:29.402 lat (usec): min=229, max=40970, avg=357.97, stdev=1365.45 00:20:29.402 clat percentiles (usec): 00:20:29.402 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 241], 00:20:29.402 | 30.00th=[ 251], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 281], 00:20:29.402 | 70.00th=[ 297], 80.00th=[ 318], 90.00th=[ 338], 95.00th=[ 388], 00:20:29.402 | 99.00th=[ 603], 99.50th=[ 603], 99.90th=[ 709], 99.95th=[ 3523], 00:20:29.402 | 99.99th=[ 3523] 00:20:29.402 bw ( KiB/s): min= 4087, max= 4087, per=25.23%, avg=4087.00, stdev= 0.00, samples=1 00:20:29.402 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:20:29.402 lat (usec) : 250=14.81%, 500=35.78%, 750=49.13%, 1000=0.10% 00:20:29.402 lat (msec) : 2=0.15%, 4=0.05% 00:20:29.402 cpu : usr=2.30%, sys=3.40%, ctx=2064, majf=0, minf=1 00:20:29.402 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.402 issued rwts: total=1024,1036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.402 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:29.402 job1: (groupid=0, jobs=1): err= 0: pid=2089242: Fri Apr 26 08:54:46 2024 00:20:29.402 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:20:29.402 slat (nsec): min=8825, max=38798, avg=9582.52, stdev=1506.93 00:20:29.402 clat (usec): min=322, max=830, avg=496.95, stdev=35.66 00:20:29.402 lat (usec): min=332, max=839, avg=506.54, stdev=35.86 00:20:29.402 clat percentiles (usec): 00:20:29.402 | 1.00th=[ 343], 5.00th=[ 441], 10.00th=[ 486], 20.00th=[ 494], 00:20:29.402 | 30.00th=[ 498], 40.00th=[ 498], 50.00th=[ 502], 60.00th=[ 506], 00:20:29.402 | 70.00th=[ 510], 80.00th=[ 510], 90.00th=[ 515], 95.00th=[ 519], 00:20:29.402 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 807], 99.95th=[ 832], 00:20:29.402 | 99.99th=[ 832] 00:20:29.402 write: IOPS=1298, BW=5195KiB/s (5319kB/s)(5200KiB/1001msec); 0 zone resets 00:20:29.402 slat (usec): min=11, max=41164, avg=56.84, stdev=1221.50 00:20:29.402 clat (usec): min=209, max=810, avg=308.27, stdev=92.87 00:20:29.402 lat (usec): min=222, max=41956, avg=365.10, stdev=1242.39 00:20:29.402 clat percentiles (usec): 00:20:29.402 | 1.00th=[ 212], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:20:29.402 | 30.00th=[ 235], 40.00th=[ 253], 50.00th=[ 277], 60.00th=[ 306], 00:20:29.402 | 70.00th=[ 367], 80.00th=[ 388], 90.00th=[ 437], 
95.00th=[ 478], 00:20:29.402 | 99.00th=[ 594], 99.50th=[ 603], 99.90th=[ 791], 99.95th=[ 807], 00:20:29.402 | 99.99th=[ 807] 00:20:29.402 bw ( KiB/s): min= 4096, max= 4096, per=25.29%, avg=4096.00, stdev= 0.00, samples=1 00:20:29.402 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:29.402 lat (usec) : 250=21.69%, 500=51.20%, 750=26.94%, 1000=0.17% 00:20:29.402 cpu : usr=1.80%, sys=2.70%, ctx=2328, majf=0, minf=1 00:20:29.402 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.402 issued rwts: total=1024,1300,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.402 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:29.402 job2: (groupid=0, jobs=1): err= 0: pid=2089264: Fri Apr 26 08:54:46 2024 00:20:29.402 read: IOPS=22, BW=90.5KiB/s (92.6kB/s)(92.0KiB/1017msec) 00:20:29.402 slat (nsec): min=10079, max=25542, avg=14141.17, stdev=5312.19 00:20:29.402 clat (usec): min=1023, max=42147, avg=36618.74, stdev=14075.46 00:20:29.402 lat (usec): min=1034, max=42158, avg=36632.88, stdev=14075.94 00:20:29.402 clat percentiles (usec): 00:20:29.403 | 1.00th=[ 1020], 5.00th=[ 1106], 10.00th=[ 1106], 20.00th=[41681], 00:20:29.403 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:20:29.403 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:29.403 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:29.403 | 99.99th=[42206] 00:20:29.403 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:20:29.403 slat (nsec): min=11628, max=40422, avg=13555.36, stdev=2233.73 00:20:29.403 clat (usec): min=223, max=887, avg=323.67, stdev=84.61 00:20:29.403 lat (usec): min=235, max=927, avg=337.22, stdev=84.96 00:20:29.403 clat percentiles (usec): 00:20:29.403 | 1.00th=[ 227], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 265], 00:20:29.403 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 310], 00:20:29.403 | 70.00th=[ 334], 80.00th=[ 367], 90.00th=[ 441], 95.00th=[ 478], 00:20:29.403 | 99.00th=[ 603], 99.50th=[ 611], 99.90th=[ 889], 99.95th=[ 889], 00:20:29.403 | 99.99th=[ 889] 00:20:29.403 bw ( KiB/s): min= 4096, max= 4096, per=25.29%, avg=4096.00, stdev= 0.00, samples=1 00:20:29.403 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:29.403 lat (usec) : 250=5.23%, 500=86.36%, 750=3.93%, 1000=0.19% 00:20:29.403 lat (msec) : 2=0.56%, 50=3.74% 00:20:29.403 cpu : usr=0.30%, sys=0.69%, ctx=535, majf=0, minf=1 00:20:29.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.403 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:29.403 job3: (groupid=0, jobs=1): err= 0: pid=2089274: Fri Apr 26 08:54:46 2024 00:20:29.403 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:20:29.403 slat (nsec): min=8730, max=44818, avg=9446.88, stdev=1522.87 00:20:29.403 clat (usec): min=345, max=772, avg=566.63, stdev=49.70 00:20:29.403 lat (usec): min=355, max=781, avg=576.08, stdev=49.62 00:20:29.403 clat percentiles (usec): 00:20:29.403 | 1.00th=[ 379], 5.00th=[ 478], 10.00th=[ 523], 20.00th=[ 545], 
00:20:29.403 | 30.00th=[ 553], 40.00th=[ 570], 50.00th=[ 578], 60.00th=[ 578], 00:20:29.403 | 70.00th=[ 586], 80.00th=[ 603], 90.00th=[ 619], 95.00th=[ 627], 00:20:29.403 | 99.00th=[ 652], 99.50th=[ 668], 99.90th=[ 742], 99.95th=[ 775], 00:20:29.403 | 99.99th=[ 775] 00:20:29.403 write: IOPS=1268, BW=5075KiB/s (5197kB/s)(5080KiB/1001msec); 0 zone resets 00:20:29.403 slat (nsec): min=11675, max=45907, avg=13117.79, stdev=1999.03 00:20:29.403 clat (usec): min=211, max=1035, avg=304.63, stdev=92.01 00:20:29.403 lat (usec): min=225, max=1059, avg=317.75, stdev=92.55 00:20:29.403 clat percentiles (usec): 00:20:29.403 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 245], 00:20:29.403 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:20:29.403 | 70.00th=[ 302], 80.00th=[ 330], 90.00th=[ 433], 95.00th=[ 502], 00:20:29.403 | 99.00th=[ 611], 99.50th=[ 619], 99.90th=[ 898], 99.95th=[ 1037], 00:20:29.403 | 99.99th=[ 1037] 00:20:29.403 bw ( KiB/s): min= 4407, max= 4407, per=27.21%, avg=4407.00, stdev= 0.00, samples=1 00:20:29.403 iops : min= 1101, max= 1101, avg=1101.00, stdev= 0.00, samples=1 00:20:29.403 lat (usec) : 250=13.51%, 500=42.28%, 750=44.03%, 1000=0.13% 00:20:29.403 lat (msec) : 2=0.04% 00:20:29.403 cpu : usr=2.60%, sys=3.60%, ctx=2294, majf=0, minf=2 00:20:29.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.403 issued rwts: total=1024,1270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:29.403 00:20:29.403 Run status group 0 (all jobs): 00:20:29.403 READ: bw=11.9MiB/s (12.5MB/s), 90.5KiB/s-4092KiB/s (92.6kB/s-4190kB/s), io=12.1MiB (12.7MB), run=1001-1017msec 00:20:29.403 WRITE: bw=15.8MiB/s (16.6MB/s), 2014KiB/s-5195KiB/s (2062kB/s-5319kB/s), io=16.1MiB (16.9MB), run=1001-1017msec 00:20:29.403 00:20:29.403 Disk stats (read/write): 00:20:29.403 nvme0n1: ios=739/1024, merge=0/0, ticks=1282/284, in_queue=1566, util=87.27% 00:20:29.403 nvme0n2: ios=812/1024, merge=0/0, ticks=1266/321, in_queue=1587, util=91.13% 00:20:29.403 nvme0n3: ios=73/512, merge=0/0, ticks=719/162, in_queue=881, util=91.94% 00:20:29.403 nvme0n4: ios=866/1024, merge=0/0, ticks=598/312, in_queue=910, util=96.34% 00:20:29.403 08:54:46 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:29.403 [global] 00:20:29.403 thread=1 00:20:29.403 invalidate=1 00:20:29.403 rw=randwrite 00:20:29.403 time_based=1 00:20:29.403 runtime=1 00:20:29.403 ioengine=libaio 00:20:29.403 direct=1 00:20:29.403 bs=4096 00:20:29.403 iodepth=1 00:20:29.403 norandommap=0 00:20:29.403 numjobs=1 00:20:29.403 00:20:29.403 verify_dump=1 00:20:29.403 verify_backlog=512 00:20:29.403 verify_state_save=0 00:20:29.403 do_verify=1 00:20:29.403 verify=crc32c-intel 00:20:29.403 [job0] 00:20:29.403 filename=/dev/nvme0n1 00:20:29.403 [job1] 00:20:29.403 filename=/dev/nvme0n2 00:20:29.403 [job2] 00:20:29.403 filename=/dev/nvme0n3 00:20:29.403 [job3] 00:20:29.403 filename=/dev/nvme0n4 00:20:29.403 Could not set queue depth (nvme0n1) 00:20:29.403 Could not set queue depth (nvme0n2) 00:20:29.403 Could not set queue depth (nvme0n3) 00:20:29.403 Could not set queue depth (nvme0n4) 00:20:29.662 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:20:29.662 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:29.662 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:29.662 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:29.662 fio-3.35 00:20:29.662 Starting 4 threads 00:20:31.055 00:20:31.055 job0: (groupid=0, jobs=1): err= 0: pid=2089644: Fri Apr 26 08:54:47 2024 00:20:31.055 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:20:31.055 slat (nsec): min=8742, max=41817, avg=10987.13, stdev=4757.17 00:20:31.055 clat (usec): min=321, max=808, avg=499.92, stdev=75.72 00:20:31.055 lat (usec): min=330, max=833, avg=510.91, stdev=79.14 00:20:31.055 clat percentiles (usec): 00:20:31.055 | 1.00th=[ 338], 5.00th=[ 392], 10.00th=[ 424], 20.00th=[ 465], 00:20:31.055 | 30.00th=[ 474], 40.00th=[ 478], 50.00th=[ 486], 60.00th=[ 490], 00:20:31.055 | 70.00th=[ 498], 80.00th=[ 545], 90.00th=[ 644], 95.00th=[ 668], 00:20:31.055 | 99.00th=[ 693], 99.50th=[ 734], 99.90th=[ 791], 99.95th=[ 807], 00:20:31.055 | 99.99th=[ 807] 00:20:31.055 write: IOPS=1446, BW=5786KiB/s (5925kB/s)(5792KiB/1001msec); 0 zone resets 00:20:31.055 slat (nsec): min=6219, max=41047, avg=12561.99, stdev=1780.08 00:20:31.055 clat (usec): min=211, max=836, avg=312.65, stdev=97.38 00:20:31.055 lat (usec): min=224, max=842, avg=325.21, stdev=97.19 00:20:31.055 clat percentiles (usec): 00:20:31.055 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 227], 00:20:31.055 | 30.00th=[ 260], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 302], 00:20:31.055 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 433], 95.00th=[ 474], 00:20:31.055 | 99.00th=[ 725], 99.50th=[ 775], 99.90th=[ 832], 99.95th=[ 840], 00:20:31.055 | 99.99th=[ 840] 00:20:31.055 bw ( KiB/s): min= 6040, max= 6040, per=39.18%, avg=6040.00, stdev= 0.00, samples=1 00:20:31.055 iops : min= 1510, max= 1510, avg=1510.00, stdev= 0.00, samples=1 00:20:31.055 lat (usec) : 250=16.91%, 500=69.38%, 750=13.07%, 1000=0.65% 00:20:31.055 cpu : usr=1.20%, sys=3.60%, ctx=2474, majf=0, minf=1 00:20:31.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.055 issued rwts: total=1024,1448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:31.055 job1: (groupid=0, jobs=1): err= 0: pid=2089659: Fri Apr 26 08:54:47 2024 00:20:31.055 read: IOPS=1165, BW=4663KiB/s (4775kB/s)(4668KiB/1001msec) 00:20:31.055 slat (nsec): min=8504, max=34058, avg=9415.51, stdev=2075.90 00:20:31.055 clat (usec): min=312, max=1151, avg=485.22, stdev=69.33 00:20:31.055 lat (usec): min=322, max=1162, avg=494.63, stdev=69.36 00:20:31.055 clat percentiles (usec): 00:20:31.055 | 1.00th=[ 322], 5.00th=[ 351], 10.00th=[ 400], 20.00th=[ 445], 00:20:31.055 | 30.00th=[ 457], 40.00th=[ 465], 50.00th=[ 478], 60.00th=[ 502], 00:20:31.055 | 70.00th=[ 537], 80.00th=[ 545], 90.00th=[ 553], 95.00th=[ 562], 00:20:31.055 | 99.00th=[ 668], 99.50th=[ 775], 99.90th=[ 807], 99.95th=[ 1156], 00:20:31.055 | 99.99th=[ 1156] 00:20:31.055 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:20:31.055 slat (nsec): min=10794, max=42423, avg=12066.96, stdev=1905.47 00:20:31.055 clat (usec): min=199, max=815, 
avg=259.59, stdev=70.38 00:20:31.055 lat (usec): min=219, max=858, avg=271.66, stdev=70.74 00:20:31.055 clat percentiles (usec): 00:20:31.055 | 1.00th=[ 210], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 219], 00:20:31.055 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 241], 00:20:31.055 | 70.00th=[ 262], 80.00th=[ 285], 90.00th=[ 347], 95.00th=[ 383], 00:20:31.055 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 791], 99.95th=[ 816], 00:20:31.055 | 99.99th=[ 816] 00:20:31.055 bw ( KiB/s): min= 5792, max= 5792, per=37.57%, avg=5792.00, stdev= 0.00, samples=1 00:20:31.055 iops : min= 1448, max= 1448, avg=1448.00, stdev= 0.00, samples=1 00:20:31.055 lat (usec) : 250=36.96%, 500=44.77%, 750=17.94%, 1000=0.30% 00:20:31.055 lat (msec) : 2=0.04% 00:20:31.055 cpu : usr=1.80%, sys=3.00%, ctx=2703, majf=0, minf=1 00:20:31.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.055 issued rwts: total=1167,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:31.055 job2: (groupid=0, jobs=1): err= 0: pid=2089674: Fri Apr 26 08:54:47 2024 00:20:31.055 read: IOPS=20, BW=80.8KiB/s (82.7kB/s)(84.0KiB/1040msec) 00:20:31.055 slat (nsec): min=11600, max=25458, avg=24360.05, stdev=2930.46 00:20:31.055 clat (usec): min=41016, max=43041, avg=42029.45, stdev=410.90 00:20:31.055 lat (usec): min=41041, max=43066, avg=42053.81, stdev=411.38 00:20:31.055 clat percentiles (usec): 00:20:31.055 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:20:31.055 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:20:31.055 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:20:31.055 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:20:31.055 | 99.99th=[43254] 00:20:31.055 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:20:31.055 slat (nsec): min=11491, max=49676, avg=12301.97, stdev=1815.72 00:20:31.055 clat (usec): min=213, max=761, avg=291.74, stdev=88.65 00:20:31.055 lat (usec): min=226, max=810, avg=304.04, stdev=89.16 00:20:31.055 clat percentiles (usec): 00:20:31.055 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 231], 20.00th=[ 235], 00:20:31.055 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 262], 60.00th=[ 273], 00:20:31.055 | 70.00th=[ 285], 80.00th=[ 322], 90.00th=[ 412], 95.00th=[ 482], 00:20:31.055 | 99.00th=[ 603], 99.50th=[ 603], 99.90th=[ 758], 99.95th=[ 758], 00:20:31.055 | 99.99th=[ 758] 00:20:31.055 bw ( KiB/s): min= 4096, max= 4096, per=26.57%, avg=4096.00, stdev= 0.00, samples=1 00:20:31.055 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:31.055 lat (usec) : 250=38.65%, 500=53.10%, 750=4.13%, 1000=0.19% 00:20:31.055 lat (msec) : 50=3.94% 00:20:31.055 cpu : usr=0.38%, sys=0.58%, ctx=533, majf=0, minf=1 00:20:31.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.055 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:31.055 job3: (groupid=0, jobs=1): err= 0: pid=2089682: Fri Apr 26 08:54:47 2024 00:20:31.055 read: IOPS=20, 
BW=80.8KiB/s (82.8kB/s)(84.0KiB/1039msec) 00:20:31.055 slat (nsec): min=11409, max=26011, avg=24989.86, stdev=3118.27 00:20:31.055 clat (usec): min=41373, max=42950, avg=41992.51, stdev=261.72 00:20:31.055 lat (usec): min=41385, max=42976, avg=42017.50, stdev=263.48 00:20:31.055 clat percentiles (usec): 00:20:31.055 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:20:31.055 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:20:31.055 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:20:31.055 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:20:31.055 | 99.99th=[42730] 00:20:31.055 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:20:31.055 slat (nsec): min=11670, max=41490, avg=12642.90, stdev=1856.32 00:20:31.055 clat (usec): min=209, max=1934, avg=284.70, stdev=122.20 00:20:31.055 lat (usec): min=221, max=1947, avg=297.35, stdev=122.58 00:20:31.055 clat percentiles (usec): 00:20:31.055 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:20:31.055 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 249], 00:20:31.055 | 70.00th=[ 258], 80.00th=[ 297], 90.00th=[ 388], 95.00th=[ 603], 00:20:31.055 | 99.00th=[ 619], 99.50th=[ 627], 99.90th=[ 1942], 99.95th=[ 1942], 00:20:31.055 | 99.99th=[ 1942] 00:20:31.055 bw ( KiB/s): min= 4096, max= 4096, per=26.57%, avg=4096.00, stdev= 0.00, samples=1 00:20:31.055 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:31.055 lat (usec) : 250=59.47%, 500=31.14%, 750=5.25% 00:20:31.055 lat (msec) : 2=0.19%, 50=3.94% 00:20:31.055 cpu : usr=0.00%, sys=1.06%, ctx=535, majf=0, minf=2 00:20:31.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:31.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:31.055 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:31.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:31.055 00:20:31.055 Run status group 0 (all jobs): 00:20:31.055 READ: bw=8588KiB/s (8795kB/s), 80.8KiB/s-4663KiB/s (82.7kB/s-4775kB/s), io=8932KiB (9146kB), run=1001-1040msec 00:20:31.055 WRITE: bw=15.1MiB/s (15.8MB/s), 1969KiB/s-6138KiB/s (2016kB/s-6285kB/s), io=15.7MiB (16.4MB), run=1001-1040msec 00:20:31.055 00:20:31.055 Disk stats (read/write): 00:20:31.055 nvme0n1: ios=1020/1024, merge=0/0, ticks=1393/296, in_queue=1689, util=91.18% 00:20:31.055 nvme0n2: ios=1074/1050, merge=0/0, ticks=710/279, in_queue=989, util=95.09% 00:20:31.055 nvme0n3: ios=72/512, merge=0/0, ticks=766/149, in_queue=915, util=94.12% 00:20:31.055 nvme0n4: ios=40/512, merge=0/0, ticks=957/140, in_queue=1097, util=97.18% 00:20:31.055 08:54:47 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:31.055 [global] 00:20:31.055 thread=1 00:20:31.056 invalidate=1 00:20:31.056 rw=write 00:20:31.056 time_based=1 00:20:31.056 runtime=1 00:20:31.056 ioengine=libaio 00:20:31.056 direct=1 00:20:31.056 bs=4096 00:20:31.056 iodepth=128 00:20:31.056 norandommap=0 00:20:31.056 numjobs=1 00:20:31.056 00:20:31.056 verify_dump=1 00:20:31.056 verify_backlog=512 00:20:31.056 verify_state_save=0 00:20:31.056 do_verify=1 00:20:31.056 verify=crc32c-intel 00:20:31.056 [job0] 00:20:31.056 filename=/dev/nvme0n1 00:20:31.056 [job1] 00:20:31.056 filename=/dev/nvme0n2 00:20:31.056 [job2] 
00:20:31.056 filename=/dev/nvme0n3 00:20:31.056 [job3] 00:20:31.056 filename=/dev/nvme0n4 00:20:31.056 Could not set queue depth (nvme0n1) 00:20:31.056 Could not set queue depth (nvme0n2) 00:20:31.056 Could not set queue depth (nvme0n3) 00:20:31.056 Could not set queue depth (nvme0n4) 00:20:31.313 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:31.313 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:31.313 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:31.313 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:31.313 fio-3.35 00:20:31.313 Starting 4 threads 00:20:32.705 00:20:32.705 job0: (groupid=0, jobs=1): err= 0: pid=2090088: Fri Apr 26 08:54:49 2024 00:20:32.705 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:20:32.705 slat (nsec): min=1708, max=33966k, avg=163459.49, stdev=1439475.85 00:20:32.705 clat (usec): min=5998, max=65323, avg=22177.00, stdev=11980.91 00:20:32.705 lat (usec): min=6006, max=65327, avg=22340.46, stdev=12087.21 00:20:32.705 clat percentiles (usec): 00:20:32.705 | 1.00th=[ 6194], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[11731], 00:20:32.705 | 30.00th=[12387], 40.00th=[14877], 50.00th=[18482], 60.00th=[24511], 00:20:32.705 | 70.00th=[27395], 80.00th=[31327], 90.00th=[42206], 95.00th=[45351], 00:20:32.705 | 99.00th=[55837], 99.50th=[55837], 99.90th=[65274], 99.95th=[65274], 00:20:32.705 | 99.99th=[65274] 00:20:32.706 write: IOPS=3360, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1004msec); 0 zone resets 00:20:32.706 slat (usec): min=2, max=19132, avg=127.81, stdev=889.73 00:20:32.706 clat (usec): min=2102, max=56202, avg=17353.87, stdev=9426.15 00:20:32.706 lat (usec): min=2143, max=56206, avg=17481.68, stdev=9485.55 00:20:32.706 clat percentiles (usec): 00:20:32.706 | 1.00th=[ 5145], 5.00th=[ 7439], 10.00th=[ 8225], 20.00th=[10290], 00:20:32.706 | 30.00th=[10945], 40.00th=[12387], 50.00th=[15533], 60.00th=[17695], 00:20:32.706 | 70.00th=[20317], 80.00th=[22938], 90.00th=[28443], 95.00th=[35914], 00:20:32.706 | 99.00th=[51119], 99.50th=[53216], 99.90th=[56361], 99.95th=[56361], 00:20:32.706 | 99.99th=[56361] 00:20:32.706 bw ( KiB/s): min=12263, max=13688, per=19.83%, avg=12975.50, stdev=1007.63, samples=2 00:20:32.706 iops : min= 3065, max= 3422, avg=3243.50, stdev=252.44, samples=2 00:20:32.706 lat (msec) : 4=0.11%, 10=13.12%, 20=48.26%, 50=35.45%, 100=3.06% 00:20:32.706 cpu : usr=1.89%, sys=4.09%, ctx=281, majf=0, minf=1 00:20:32.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:20:32.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:32.706 issued rwts: total=3072,3374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:32.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:32.706 job1: (groupid=0, jobs=1): err= 0: pid=2090098: Fri Apr 26 08:54:49 2024 00:20:32.706 read: IOPS=4304, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1006msec) 00:20:32.706 slat (nsec): min=1763, max=12240k, avg=87940.99, stdev=659989.32 00:20:32.706 clat (usec): min=1347, max=30727, avg=12837.76, stdev=4479.16 00:20:32.706 lat (usec): min=1355, max=30734, avg=12925.70, stdev=4516.03 00:20:32.706 clat percentiles (usec): 00:20:32.706 | 1.00th=[ 1663], 5.00th=[ 5604], 10.00th=[ 9241], 20.00th=[10421], 
00:20:32.706 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11994], 60.00th=[12649], 00:20:32.706 | 70.00th=[13698], 80.00th=[16319], 90.00th=[19006], 95.00th=[20841], 00:20:32.706 | 99.00th=[27132], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:20:32.706 | 99.99th=[30802] 00:20:32.706 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:20:32.706 slat (usec): min=2, max=39407, avg=107.93, stdev=836.27 00:20:32.706 clat (usec): min=1385, max=43967, avg=14544.08, stdev=6653.52 00:20:32.706 lat (usec): min=1495, max=43972, avg=14652.01, stdev=6688.84 00:20:32.706 clat percentiles (usec): 00:20:32.706 | 1.00th=[ 4948], 5.00th=[ 6849], 10.00th=[ 7635], 20.00th=[ 8848], 00:20:32.706 | 30.00th=[10159], 40.00th=[11600], 50.00th=[13435], 60.00th=[15533], 00:20:32.706 | 70.00th=[16909], 80.00th=[19006], 90.00th=[22152], 95.00th=[25822], 00:20:32.706 | 99.00th=[40109], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:20:32.706 | 99.99th=[43779] 00:20:32.706 bw ( KiB/s): min=16384, max=20480, per=28.16%, avg=18432.00, stdev=2896.31, samples=2 00:20:32.706 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:20:32.706 lat (msec) : 2=0.55%, 4=1.04%, 10=21.03%, 20=65.45%, 50=11.93% 00:20:32.706 cpu : usr=3.38%, sys=4.58%, ctx=604, majf=0, minf=1 00:20:32.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:32.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:32.706 issued rwts: total=4330,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:32.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:32.706 job2: (groupid=0, jobs=1): err= 0: pid=2090115: Fri Apr 26 08:54:49 2024 00:20:32.706 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:20:32.706 slat (nsec): min=1795, max=19634k, avg=117593.77, stdev=761691.76 00:20:32.706 clat (usec): min=2246, max=42223, avg=15883.12, stdev=6748.30 00:20:32.706 lat (usec): min=2255, max=42234, avg=16000.72, stdev=6790.49 00:20:32.706 clat percentiles (usec): 00:20:32.706 | 1.00th=[ 6194], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10814], 00:20:32.706 | 30.00th=[11469], 40.00th=[12649], 50.00th=[14353], 60.00th=[15926], 00:20:32.706 | 70.00th=[16909], 80.00th=[19268], 90.00th=[24773], 95.00th=[30278], 00:20:32.706 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:32.706 | 99.99th=[42206] 00:20:32.706 write: IOPS=4390, BW=17.1MiB/s (18.0MB/s)(17.3MiB/1009msec); 0 zone resets 00:20:32.706 slat (usec): min=2, max=12849, avg=103.26, stdev=659.46 00:20:32.706 clat (usec): min=1513, max=54826, avg=14201.67, stdev=7874.17 00:20:32.706 lat (usec): min=1527, max=54831, avg=14304.92, stdev=7899.43 00:20:32.706 clat percentiles (usec): 00:20:32.706 | 1.00th=[ 3556], 5.00th=[ 5473], 10.00th=[ 6390], 20.00th=[ 8717], 00:20:32.706 | 30.00th=[ 9634], 40.00th=[10945], 50.00th=[12125], 60.00th=[14222], 00:20:32.706 | 70.00th=[16188], 80.00th=[19268], 90.00th=[22414], 95.00th=[27395], 00:20:32.706 | 99.00th=[49021], 99.50th=[51643], 99.90th=[54789], 99.95th=[54789], 00:20:32.706 | 99.99th=[54789] 00:20:32.706 bw ( KiB/s): min=13944, max=20439, per=26.27%, avg=17191.50, stdev=4592.66, samples=2 00:20:32.706 iops : min= 3486, max= 5109, avg=4297.50, stdev=1147.63, samples=2 00:20:32.706 lat (msec) : 2=0.02%, 4=0.62%, 10=21.76%, 20=59.68%, 50=17.48% 00:20:32.706 lat (msec) : 100=0.45% 00:20:32.706 cpu : usr=3.08%, sys=6.15%, ctx=396, majf=0, 
minf=1 00:20:32.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:32.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:32.706 issued rwts: total=4096,4430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:32.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:32.706 job3: (groupid=0, jobs=1): err= 0: pid=2090121: Fri Apr 26 08:54:49 2024 00:20:32.706 read: IOPS=3806, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1007msec) 00:20:32.706 slat (usec): min=2, max=14299, avg=115.66, stdev=812.41 00:20:32.706 clat (usec): min=3193, max=39753, avg=15570.11, stdev=4996.59 00:20:32.706 lat (usec): min=7084, max=39759, avg=15685.78, stdev=5029.95 00:20:32.706 clat percentiles (usec): 00:20:32.706 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[11731], 00:20:32.706 | 30.00th=[13042], 40.00th=[13698], 50.00th=[14222], 60.00th=[15533], 00:20:32.706 | 70.00th=[17695], 80.00th=[18744], 90.00th=[21627], 95.00th=[23200], 00:20:32.706 | 99.00th=[35390], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:20:32.706 | 99.99th=[39584] 00:20:32.706 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:20:32.706 slat (usec): min=3, max=11161, avg=128.03, stdev=703.70 00:20:32.706 clat (usec): min=1990, max=39584, avg=16573.94, stdev=6284.03 00:20:32.706 lat (usec): min=2008, max=39592, avg=16701.96, stdev=6309.27 00:20:32.706 clat percentiles (usec): 00:20:32.706 | 1.00th=[ 6718], 5.00th=[ 8848], 10.00th=[10421], 20.00th=[11600], 00:20:32.706 | 30.00th=[12649], 40.00th=[14353], 50.00th=[15270], 60.00th=[16450], 00:20:32.706 | 70.00th=[18220], 80.00th=[19530], 90.00th=[27395], 95.00th=[30016], 00:20:32.706 | 99.00th=[34866], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 00:20:32.706 | 99.99th=[39584] 00:20:32.706 bw ( KiB/s): min=15584, max=17149, per=25.01%, avg=16366.50, stdev=1106.62, samples=2 00:20:32.706 iops : min= 3896, max= 4287, avg=4091.50, stdev=276.48, samples=2 00:20:32.706 lat (msec) : 2=0.03%, 4=0.01%, 10=8.50%, 20=73.92%, 50=17.54% 00:20:32.706 cpu : usr=3.88%, sys=4.57%, ctx=450, majf=0, minf=1 00:20:32.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:32.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:32.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:32.706 issued rwts: total=3833,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:32.706 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:32.706 00:20:32.706 Run status group 0 (all jobs): 00:20:32.706 READ: bw=59.4MiB/s (62.2MB/s), 12.0MiB/s-16.8MiB/s (12.5MB/s-17.6MB/s), io=59.9MiB (62.8MB), run=1004-1009msec 00:20:32.706 WRITE: bw=63.9MiB/s (67.0MB/s), 13.1MiB/s-17.9MiB/s (13.8MB/s-18.8MB/s), io=64.5MiB (67.6MB), run=1004-1009msec 00:20:32.706 00:20:32.706 Disk stats (read/write): 00:20:32.706 nvme0n1: ios=2447/2560, merge=0/0, ticks=36508/25503, in_queue=62011, util=85.67% 00:20:32.706 nvme0n2: ios=3585/3646, merge=0/0, ticks=44331/53342, in_queue=97673, util=99.59% 00:20:32.706 nvme0n3: ios=3606/3841, merge=0/0, ticks=38958/34150, in_queue=73108, util=97.65% 00:20:32.706 nvme0n4: ios=3099/3532, merge=0/0, ticks=47012/54392, in_queue=101404, util=97.29% 00:20:32.706 08:54:49 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:32.706 [global] 00:20:32.706 
thread=1 00:20:32.706 invalidate=1 00:20:32.706 rw=randwrite 00:20:32.706 time_based=1 00:20:32.706 runtime=1 00:20:32.706 ioengine=libaio 00:20:32.706 direct=1 00:20:32.706 bs=4096 00:20:32.706 iodepth=128 00:20:32.706 norandommap=0 00:20:32.706 numjobs=1 00:20:32.706 00:20:32.706 verify_dump=1 00:20:32.706 verify_backlog=512 00:20:32.706 verify_state_save=0 00:20:32.706 do_verify=1 00:20:32.706 verify=crc32c-intel 00:20:32.706 [job0] 00:20:32.706 filename=/dev/nvme0n1 00:20:32.706 [job1] 00:20:32.706 filename=/dev/nvme0n2 00:20:32.706 [job2] 00:20:32.706 filename=/dev/nvme0n3 00:20:32.706 [job3] 00:20:32.706 filename=/dev/nvme0n4 00:20:32.706 Could not set queue depth (nvme0n1) 00:20:32.706 Could not set queue depth (nvme0n2) 00:20:32.706 Could not set queue depth (nvme0n3) 00:20:32.706 Could not set queue depth (nvme0n4) 00:20:32.967 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:32.967 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:32.967 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:32.967 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:32.967 fio-3.35 00:20:32.967 Starting 4 threads 00:20:34.390 00:20:34.390 job0: (groupid=0, jobs=1): err= 0: pid=2090515: Fri Apr 26 08:54:51 2024 00:20:34.390 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:20:34.390 slat (nsec): min=1773, max=21546k, avg=117756.39, stdev=891541.30 00:20:34.390 clat (usec): min=4335, max=53547, avg=16785.13, stdev=8863.53 00:20:34.390 lat (usec): min=4354, max=53574, avg=16902.89, stdev=8929.00 00:20:34.390 clat percentiles (usec): 00:20:34.390 | 1.00th=[ 6521], 5.00th=[ 8029], 10.00th=[ 9503], 20.00th=[10159], 00:20:34.390 | 30.00th=[10814], 40.00th=[11600], 50.00th=[12649], 60.00th=[14353], 00:20:34.390 | 70.00th=[19006], 80.00th=[25822], 90.00th=[28705], 95.00th=[34866], 00:20:34.390 | 99.00th=[44303], 99.50th=[45876], 99.90th=[45876], 99.95th=[47449], 00:20:34.390 | 99.99th=[53740] 00:20:34.390 write: IOPS=3680, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1004msec); 0 zone resets 00:20:34.390 slat (usec): min=2, max=14718, avg=133.84, stdev=787.31 00:20:34.390 clat (usec): min=2166, max=46552, avg=18206.84, stdev=8437.78 00:20:34.390 lat (usec): min=2400, max=46564, avg=18340.68, stdev=8473.78 00:20:34.390 clat percentiles (usec): 00:20:34.390 | 1.00th=[ 3818], 5.00th=[ 6063], 10.00th=[ 7046], 20.00th=[10683], 00:20:34.390 | 30.00th=[13042], 40.00th=[15795], 50.00th=[18482], 60.00th=[20055], 00:20:34.390 | 70.00th=[22676], 80.00th=[24511], 90.00th=[28967], 95.00th=[32113], 00:20:34.390 | 99.00th=[44303], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:20:34.390 | 99.99th=[46400] 00:20:34.390 bw ( KiB/s): min=12288, max=16384, per=24.08%, avg=14336.00, stdev=2896.31, samples=2 00:20:34.390 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:20:34.390 lat (msec) : 4=0.82%, 10=17.13%, 20=48.21%, 50=33.82%, 100=0.01% 00:20:34.390 cpu : usr=2.79%, sys=4.29%, ctx=415, majf=0, minf=1 00:20:34.390 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:20:34.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:34.390 issued rwts: total=3584,3695,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.390 
latency : target=0, window=0, percentile=100.00%, depth=128 00:20:34.390 job1: (groupid=0, jobs=1): err= 0: pid=2090531: Fri Apr 26 08:54:51 2024 00:20:34.390 read: IOPS=3939, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1005msec) 00:20:34.390 slat (nsec): min=1748, max=13609k, avg=97113.52, stdev=604810.53 00:20:34.390 clat (usec): min=728, max=33643, avg=13308.48, stdev=6078.85 00:20:34.390 lat (usec): min=4570, max=44751, avg=13405.60, stdev=6111.13 00:20:34.390 clat percentiles (usec): 00:20:34.390 | 1.00th=[ 4948], 5.00th=[ 6128], 10.00th=[ 8029], 20.00th=[ 8848], 00:20:34.390 | 30.00th=[ 9765], 40.00th=[10683], 50.00th=[11076], 60.00th=[12125], 00:20:34.390 | 70.00th=[15008], 80.00th=[17171], 90.00th=[22152], 95.00th=[26608], 00:20:34.390 | 99.00th=[33162], 99.50th=[33162], 99.90th=[33817], 99.95th=[33817], 00:20:34.390 | 99.99th=[33817] 00:20:34.390 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:20:34.390 slat (usec): min=2, max=12542, avg=142.25, stdev=610.69 00:20:34.390 clat (usec): min=3323, max=41849, avg=18085.04, stdev=6279.22 00:20:34.390 lat (usec): min=3334, max=42278, avg=18227.29, stdev=6319.49 00:20:34.390 clat percentiles (usec): 00:20:34.390 | 1.00th=[ 4883], 5.00th=[ 8717], 10.00th=[10028], 20.00th=[13304], 00:20:34.390 | 30.00th=[15795], 40.00th=[16909], 50.00th=[17957], 60.00th=[19006], 00:20:34.390 | 70.00th=[19792], 80.00th=[21103], 90.00th=[24511], 95.00th=[29492], 00:20:34.390 | 99.00th=[40109], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:20:34.390 | 99.99th=[41681] 00:20:34.390 bw ( KiB/s): min=16384, max=16384, per=27.52%, avg=16384.00, stdev= 0.00, samples=2 00:20:34.390 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:20:34.390 lat (usec) : 750=0.01% 00:20:34.390 lat (msec) : 4=0.04%, 10=20.72%, 20=56.76%, 50=22.47% 00:20:34.390 cpu : usr=1.99%, sys=4.28%, ctx=594, majf=0, minf=1 00:20:34.390 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:34.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:34.390 issued rwts: total=3959,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.390 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:34.390 job2: (groupid=0, jobs=1): err= 0: pid=2090553: Fri Apr 26 08:54:51 2024 00:20:34.390 read: IOPS=4522, BW=17.7MiB/s (18.5MB/s)(18.0MiB/1019msec) 00:20:34.390 slat (nsec): min=1839, max=9969.4k, avg=94262.72, stdev=653733.16 00:20:34.390 clat (usec): min=6299, max=36055, avg=13276.18, stdev=4139.59 00:20:34.390 lat (usec): min=6884, max=43008, avg=13370.44, stdev=4170.50 00:20:34.390 clat percentiles (usec): 00:20:34.390 | 1.00th=[ 7439], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10028], 00:20:34.390 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12256], 60.00th=[13435], 00:20:34.390 | 70.00th=[14353], 80.00th=[16450], 90.00th=[18220], 95.00th=[21365], 00:20:34.390 | 99.00th=[25297], 99.50th=[31065], 99.90th=[33162], 99.95th=[33162], 00:20:34.390 | 99.99th=[35914] 00:20:34.390 write: IOPS=4724, BW=18.5MiB/s (19.3MB/s)(18.8MiB/1019msec); 0 zone resets 00:20:34.390 slat (usec): min=2, max=17355, avg=105.32, stdev=670.14 00:20:34.390 clat (usec): min=3023, max=40826, avg=14130.50, stdev=5220.70 00:20:34.390 lat (usec): min=4214, max=40834, avg=14235.82, stdev=5229.39 00:20:34.390 clat percentiles (usec): 00:20:34.390 | 1.00th=[ 6063], 5.00th=[ 7635], 10.00th=[ 8717], 20.00th=[10159], 00:20:34.390 | 30.00th=[10945], 
40.00th=[12125], 50.00th=[13173], 60.00th=[14615], 00:20:34.390 | 70.00th=[15926], 80.00th=[17695], 90.00th=[20055], 95.00th=[22414], 00:20:34.390 | 99.00th=[32637], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:20:34.390 | 99.99th=[40633] 00:20:34.390 bw ( KiB/s): min=17968, max=19528, per=31.49%, avg=18748.00, stdev=1103.09, samples=2 00:20:34.390 iops : min= 4492, max= 4882, avg=4687.00, stdev=275.77, samples=2 00:20:34.390 lat (msec) : 4=0.02%, 10=18.97%, 20=72.10%, 50=8.92% 00:20:34.390 cpu : usr=5.50%, sys=5.89%, ctx=514, majf=0, minf=1 00:20:34.390 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:20:34.390 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.390 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:34.390 issued rwts: total=4608,4814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.390 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:34.390 job3: (groupid=0, jobs=1): err= 0: pid=2090561: Fri Apr 26 08:54:51 2024 00:20:34.390 read: IOPS=2497, BW=9990KiB/s (10.2MB/s)(9.82MiB/1007msec) 00:20:34.390 slat (usec): min=2, max=20672, avg=183.10, stdev=1217.34 00:20:34.390 clat (usec): min=2939, max=58864, avg=24952.77, stdev=10728.39 00:20:34.390 lat (usec): min=6559, max=58890, avg=25135.87, stdev=10806.92 00:20:34.390 clat percentiles (usec): 00:20:34.390 | 1.00th=[ 6980], 5.00th=[10421], 10.00th=[11207], 20.00th=[13435], 00:20:34.390 | 30.00th=[18220], 40.00th=[22938], 50.00th=[25035], 60.00th=[27919], 00:20:34.390 | 70.00th=[29754], 80.00th=[34866], 90.00th=[39584], 95.00th=[45351], 00:20:34.390 | 99.00th=[48497], 99.50th=[50594], 99.90th=[54789], 99.95th=[55313], 00:20:34.390 | 99.99th=[58983] 00:20:34.390 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:20:34.390 slat (usec): min=2, max=27963, avg=196.60, stdev=1277.21 00:20:34.390 clat (usec): min=3501, max=60921, avg=25372.64, stdev=11239.87 00:20:34.390 lat (usec): min=3516, max=60938, avg=25569.24, stdev=11315.16 00:20:34.390 clat percentiles (usec): 00:20:34.391 | 1.00th=[ 3949], 5.00th=[10552], 10.00th=[12256], 20.00th=[16319], 00:20:34.391 | 30.00th=[19530], 40.00th=[20841], 50.00th=[23200], 60.00th=[25560], 00:20:34.391 | 70.00th=[28181], 80.00th=[33817], 90.00th=[41681], 95.00th=[45876], 00:20:34.391 | 99.00th=[56361], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:20:34.391 | 99.99th=[61080] 00:20:34.391 bw ( KiB/s): min= 8192, max=12288, per=17.20%, avg=10240.00, stdev=2896.31, samples=2 00:20:34.391 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:20:34.391 lat (msec) : 4=0.53%, 10=3.05%, 20=28.81%, 50=65.38%, 100=2.23% 00:20:34.391 cpu : usr=2.78%, sys=4.17%, ctx=266, majf=0, minf=1 00:20:34.391 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:20:34.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:34.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:34.391 issued rwts: total=2515,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:34.391 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:34.391 00:20:34.391 Run status group 0 (all jobs): 00:20:34.391 READ: bw=56.2MiB/s (59.0MB/s), 9990KiB/s-17.7MiB/s (10.2MB/s-18.5MB/s), io=57.3MiB (60.1MB), run=1004-1019msec 00:20:34.391 WRITE: bw=58.1MiB/s (61.0MB/s), 9.93MiB/s-18.5MiB/s (10.4MB/s-19.3MB/s), io=59.2MiB (62.1MB), run=1004-1019msec 00:20:34.391 00:20:34.391 Disk stats (read/write): 00:20:34.391 
nvme0n1: ios=2991/3072, merge=0/0, ticks=33740/34479, in_queue=68219, util=99.00% 00:20:34.391 nvme0n2: ios=3174/3584, merge=0/0, ticks=21391/27361, in_queue=48752, util=97.45% 00:20:34.391 nvme0n3: ios=3628/3847, merge=0/0, ticks=49794/52127, in_queue=101921, util=97.24% 00:20:34.391 nvme0n4: ios=2106/2231, merge=0/0, ticks=26815/31494, in_queue=58309, util=99.78% 00:20:34.391 08:54:51 -- target/fio.sh@55 -- # sync 00:20:34.391 08:54:51 -- target/fio.sh@59 -- # fio_pid=2090656 00:20:34.391 08:54:51 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:34.391 08:54:51 -- target/fio.sh@61 -- # sleep 3 00:20:34.391 [global] 00:20:34.391 thread=1 00:20:34.391 invalidate=1 00:20:34.391 rw=read 00:20:34.391 time_based=1 00:20:34.391 runtime=10 00:20:34.391 ioengine=libaio 00:20:34.391 direct=1 00:20:34.391 bs=4096 00:20:34.391 iodepth=1 00:20:34.391 norandommap=1 00:20:34.391 numjobs=1 00:20:34.391 00:20:34.391 [job0] 00:20:34.391 filename=/dev/nvme0n1 00:20:34.391 [job1] 00:20:34.391 filename=/dev/nvme0n2 00:20:34.391 [job2] 00:20:34.391 filename=/dev/nvme0n3 00:20:34.391 [job3] 00:20:34.391 filename=/dev/nvme0n4 00:20:34.391 Could not set queue depth (nvme0n1) 00:20:34.391 Could not set queue depth (nvme0n2) 00:20:34.391 Could not set queue depth (nvme0n3) 00:20:34.391 Could not set queue depth (nvme0n4) 00:20:34.649 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:34.649 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:34.649 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:34.649 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:34.649 fio-3.35 00:20:34.649 Starting 4 threads 00:20:37.201 08:54:54 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:37.201 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=9482240, buflen=4096 00:20:37.201 fio: pid=2091036, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:37.460 08:54:54 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:37.460 08:54:54 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:37.460 08:54:54 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:37.460 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=19394560, buflen=4096 00:20:37.460 fio: pid=2091026, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:37.718 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=19492864, buflen=4096 00:20:37.718 fio: pid=2090975, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:37.718 08:54:54 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:37.718 08:54:54 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:37.977 08:54:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:37.977 08:54:55 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc2 00:20:37.977 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=7135232, buflen=4096 00:20:37.977 fio: pid=2090995, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:20:37.977 00:20:37.977 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2090975: Fri Apr 26 08:54:55 2024 00:20:37.977 read: IOPS=1614, BW=6455KiB/s (6610kB/s)(18.6MiB/2949msec) 00:20:37.977 slat (usec): min=8, max=14068, avg=22.64, stdev=340.76 00:20:37.977 clat (usec): min=313, max=1551, avg=590.40, stdev=152.49 00:20:37.977 lat (usec): min=322, max=14490, avg=613.05, stdev=375.67 00:20:37.977 clat percentiles (usec): 00:20:37.977 | 1.00th=[ 351], 5.00th=[ 388], 10.00th=[ 433], 20.00th=[ 494], 00:20:37.977 | 30.00th=[ 498], 40.00th=[ 529], 50.00th=[ 562], 60.00th=[ 586], 00:20:37.977 | 70.00th=[ 627], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 881], 00:20:37.977 | 99.00th=[ 1106], 99.50th=[ 1221], 99.90th=[ 1352], 99.95th=[ 1369], 00:20:37.977 | 99.99th=[ 1549] 00:20:37.977 bw ( KiB/s): min= 5104, max= 7424, per=37.10%, avg=6348.80, stdev=1061.81, samples=5 00:20:37.977 iops : min= 1276, max= 1856, avg=1587.20, stdev=265.45, samples=5 00:20:37.977 lat (usec) : 500=31.97%, 750=55.74%, 1000=9.75% 00:20:37.977 lat (msec) : 2=2.52% 00:20:37.977 cpu : usr=0.95%, sys=2.44%, ctx=4769, majf=0, minf=1 00:20:37.977 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:37.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.977 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.977 issued rwts: total=4760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.977 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:37.977 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2090995: Fri Apr 26 08:54:55 2024 00:20:37.977 read: IOPS=550, BW=2199KiB/s (2252kB/s)(6968KiB/3168msec) 00:20:37.977 slat (usec): min=8, max=35631, avg=52.68, stdev=977.61 00:20:37.977 clat (usec): min=309, max=48442, avg=1763.79, stdev=6955.92 00:20:37.977 lat (usec): min=318, max=54128, avg=1812.23, stdev=7079.05 00:20:37.977 clat percentiles (usec): 00:20:37.977 | 1.00th=[ 330], 5.00th=[ 355], 10.00th=[ 371], 20.00th=[ 445], 00:20:37.977 | 30.00th=[ 523], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 553], 00:20:37.977 | 70.00th=[ 570], 80.00th=[ 603], 90.00th=[ 750], 95.00th=[ 1057], 00:20:37.977 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[48497], 00:20:37.977 | 99.99th=[48497] 00:20:37.977 bw ( KiB/s): min= 88, max= 6440, per=13.32%, avg=2279.67, stdev=2492.94, samples=6 00:20:37.977 iops : min= 22, max= 1610, avg=569.83, stdev=623.17, samples=6 00:20:37.977 lat (usec) : 500=26.45%, 750=63.63%, 1000=3.21% 00:20:37.977 lat (msec) : 2=3.56%, 4=0.11%, 20=0.11%, 50=2.87% 00:20:37.977 cpu : usr=0.35%, sys=0.82%, ctx=1748, majf=0, minf=1 00:20:37.977 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:37.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.977 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.977 issued rwts: total=1743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.977 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:37.977 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2091026: Fri Apr 26 08:54:55 2024 00:20:37.977 read: 
IOPS=1695, BW=6781KiB/s (6944kB/s)(18.5MiB/2793msec) 00:20:37.977 slat (nsec): min=5185, max=39352, avg=11147.84, stdev=4777.46 00:20:37.977 clat (usec): min=357, max=43051, avg=572.47, stdev=1361.16 00:20:37.977 lat (usec): min=366, max=43076, avg=583.62, stdev=1361.72 00:20:37.977 clat percentiles (usec): 00:20:37.977 | 1.00th=[ 404], 5.00th=[ 433], 10.00th=[ 445], 20.00th=[ 465], 00:20:37.977 | 30.00th=[ 490], 40.00th=[ 498], 50.00th=[ 502], 60.00th=[ 519], 00:20:37.977 | 70.00th=[ 537], 80.00th=[ 562], 90.00th=[ 644], 95.00th=[ 701], 00:20:37.977 | 99.00th=[ 930], 99.50th=[ 979], 99.90th=[41681], 99.95th=[42206], 00:20:37.977 | 99.99th=[43254] 00:20:37.977 bw ( KiB/s): min= 6848, max= 8184, per=43.08%, avg=7371.20, stdev=491.58, samples=5 00:20:37.977 iops : min= 1712, max= 2046, avg=1842.80, stdev=122.90, samples=5 00:20:37.977 lat (usec) : 500=46.79%, 750=50.02%, 1000=2.77% 00:20:37.977 lat (msec) : 2=0.30%, 50=0.11% 00:20:37.977 cpu : usr=0.72%, sys=2.40%, ctx=4736, majf=0, minf=1 00:20:37.977 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:37.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.977 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.977 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.978 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:37.978 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2091036: Fri Apr 26 08:54:55 2024 00:20:37.978 read: IOPS=905, BW=3619KiB/s (3705kB/s)(9260KiB/2559msec) 00:20:37.978 slat (nsec): min=8666, max=43435, avg=11698.22, stdev=5770.15 00:20:37.978 clat (usec): min=455, max=42966, avg=1087.73, stdev=4688.76 00:20:37.978 lat (usec): min=465, max=42991, avg=1099.43, stdev=4690.25 00:20:37.978 clat percentiles (usec): 00:20:37.978 | 1.00th=[ 482], 5.00th=[ 490], 10.00th=[ 494], 20.00th=[ 498], 00:20:37.978 | 30.00th=[ 502], 40.00th=[ 506], 50.00th=[ 510], 60.00th=[ 515], 00:20:37.978 | 70.00th=[ 519], 80.00th=[ 586], 90.00th=[ 701], 95.00th=[ 799], 00:20:37.978 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:20:37.978 | 99.99th=[42730] 00:20:37.978 bw ( KiB/s): min= 96, max= 7680, per=20.80%, avg=3560.00, stdev=3752.35, samples=5 00:20:37.978 iops : min= 24, max= 1920, avg=890.00, stdev=938.09, samples=5 00:20:37.978 lat (usec) : 500=24.61%, 750=69.30%, 1000=3.28% 00:20:37.978 lat (msec) : 2=1.47%, 50=1.30% 00:20:37.978 cpu : usr=0.47%, sys=1.25%, ctx=2317, majf=0, minf=2 00:20:37.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:37.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.978 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.978 issued rwts: total=2316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.978 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:37.978 00:20:37.978 Run status group 0 (all jobs): 00:20:37.978 READ: bw=16.7MiB/s (17.5MB/s), 2199KiB/s-6781KiB/s (2252kB/s-6944kB/s), io=52.9MiB (55.5MB), run=2559-3168msec 00:20:37.978 00:20:37.978 Disk stats (read/write): 00:20:37.978 nvme0n1: ios=4520/0, merge=0/0, ticks=3410/0, in_queue=3410, util=99.23% 00:20:37.978 nvme0n2: ios=1738/0, merge=0/0, ticks=2938/0, in_queue=2938, util=92.41% 00:20:37.978 nvme0n3: ios=4736/0, merge=0/0, ticks=2685/0, in_queue=2685, util=95.93% 00:20:37.978 nvme0n4: ios=2320/0, merge=0/0, ticks=2707/0, in_queue=2707, util=99.39% 
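The run above is the nvmf hotplug check: fio read jobs run for 10 seconds while the RAID and malloc bdevs backing the exported namespaces are deleted out from under them, so the err=121 (Remote I/O error) failures are the expected outcome. As a minimal standalone sketch of that sequence, reusing the wrapper flags and bdev names from the trace (SPDK_DIR stands in for the workspace checkout; this condenses the target/fio.sh steps, it is not the script itself):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Start 10s read jobs against the exported namespaces, in the background.
    "$SPDK_DIR/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3
    # Delete the backing bdevs while I/O is in flight; the connected fio jobs
    # should then fail with err=121 (Remote I/O error) on those namespaces.
    "$SPDK_DIR/scripts/rpc.py" bdev_raid_delete concat0
    "$SPDK_DIR/scripts/rpc.py" bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$SPDK_DIR/scripts/rpc.py" bdev_malloc_delete "$m"
    done
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'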
00:20:37.978 08:54:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:37.978 08:54:55 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:38.236 08:54:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:38.236 08:54:55 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:38.495 08:54:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:38.495 08:54:55 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:38.754 08:54:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:38.754 08:54:55 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:38.754 08:54:55 -- target/fio.sh@69 -- # fio_status=0 00:20:38.754 08:54:55 -- target/fio.sh@70 -- # wait 2090656 00:20:38.754 08:54:55 -- target/fio.sh@70 -- # fio_status=4 00:20:38.754 08:54:55 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:39.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:39.013 08:54:56 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:39.013 08:54:56 -- common/autotest_common.sh@1205 -- # local i=0 00:20:39.013 08:54:56 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:39.013 08:54:56 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:39.013 08:54:56 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:39.013 08:54:56 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:39.013 08:54:56 -- common/autotest_common.sh@1217 -- # return 0 00:20:39.013 08:54:56 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:39.013 08:54:56 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:39.013 nvmf hotplug test: fio failed as expected 00:20:39.013 08:54:56 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.272 08:54:56 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:39.272 08:54:56 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:39.272 08:54:56 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:39.272 08:54:56 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:39.272 08:54:56 -- target/fio.sh@91 -- # nvmftestfini 00:20:39.272 08:54:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:39.272 08:54:56 -- nvmf/common.sh@117 -- # sync 00:20:39.272 08:54:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.272 08:54:56 -- nvmf/common.sh@120 -- # set +e 00:20:39.272 08:54:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.272 08:54:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:39.272 rmmod nvme_tcp 00:20:39.272 rmmod nvme_fabrics 00:20:39.272 rmmod nvme_keyring 00:20:39.272 08:54:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.272 08:54:56 -- nvmf/common.sh@124 -- # set -e 00:20:39.272 08:54:56 -- nvmf/common.sh@125 -- # return 0 00:20:39.272 08:54:56 -- nvmf/common.sh@478 -- # '[' -n 2087781 ']' 00:20:39.272 08:54:56 -- nvmf/common.sh@479 -- # killprocess 2087781 00:20:39.272 08:54:56 -- 
common/autotest_common.sh@936 -- # '[' -z 2087781 ']' 00:20:39.272 08:54:56 -- common/autotest_common.sh@940 -- # kill -0 2087781 00:20:39.272 08:54:56 -- common/autotest_common.sh@941 -- # uname 00:20:39.272 08:54:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:39.272 08:54:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2087781 00:20:39.272 08:54:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:39.272 08:54:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:39.272 08:54:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2087781' 00:20:39.272 killing process with pid 2087781 00:20:39.272 08:54:56 -- common/autotest_common.sh@955 -- # kill 2087781 00:20:39.272 08:54:56 -- common/autotest_common.sh@960 -- # wait 2087781 00:20:39.531 08:54:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:39.531 08:54:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:39.531 08:54:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:39.531 08:54:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.531 08:54:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:39.531 08:54:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.531 08:54:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.531 08:54:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.067 08:54:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:42.067 00:20:42.067 real 0m27.968s 00:20:42.067 user 2m2.577s 00:20:42.067 sys 0m9.990s 00:20:42.067 08:54:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:42.067 08:54:58 -- common/autotest_common.sh@10 -- # set +x 00:20:42.067 ************************************ 00:20:42.067 END TEST nvmf_fio_target 00:20:42.067 ************************************ 00:20:42.067 08:54:58 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:42.067 08:54:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:42.067 08:54:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:42.067 08:54:58 -- common/autotest_common.sh@10 -- # set +x 00:20:42.067 ************************************ 00:20:42.067 START TEST nvmf_bdevio 00:20:42.067 ************************************ 00:20:42.067 08:54:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:42.067 * Looking for test storage... 
00:20:42.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:42.067 08:54:59 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.067 08:54:59 -- nvmf/common.sh@7 -- # uname -s 00:20:42.067 08:54:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.067 08:54:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.067 08:54:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.067 08:54:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.067 08:54:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.067 08:54:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.067 08:54:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.067 08:54:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.067 08:54:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.067 08:54:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.067 08:54:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:42.067 08:54:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:42.067 08:54:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.067 08:54:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.067 08:54:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.067 08:54:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.067 08:54:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.067 08:54:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.067 08:54:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.067 08:54:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.068 08:54:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.068 08:54:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.068 08:54:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.068 08:54:59 -- paths/export.sh@5 -- # export PATH 00:20:42.068 08:54:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.068 08:54:59 -- nvmf/common.sh@47 -- # : 0 00:20:42.068 08:54:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.068 08:54:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.068 08:54:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.068 08:54:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.068 08:54:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.068 08:54:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.068 08:54:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.068 08:54:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:42.068 08:54:59 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:42.068 08:54:59 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:42.068 08:54:59 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:42.068 08:54:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:42.068 08:54:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.068 08:54:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:42.068 08:54:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:42.068 08:54:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:42.068 08:54:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.068 08:54:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.068 08:54:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.068 08:54:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:42.068 08:54:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:42.068 08:54:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.068 08:54:59 -- common/autotest_common.sh@10 -- # set +x 00:20:48.625 08:55:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:48.625 08:55:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:48.626 08:55:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:48.626 08:55:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:48.626 08:55:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:48.626 08:55:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:48.626 08:55:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:48.626 08:55:05 -- nvmf/common.sh@295 -- # net_devs=() 00:20:48.626 08:55:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:48.626 08:55:05 -- nvmf/common.sh@296 
-- # e810=() 00:20:48.626 08:55:05 -- nvmf/common.sh@296 -- # local -ga e810 00:20:48.626 08:55:05 -- nvmf/common.sh@297 -- # x722=() 00:20:48.626 08:55:05 -- nvmf/common.sh@297 -- # local -ga x722 00:20:48.626 08:55:05 -- nvmf/common.sh@298 -- # mlx=() 00:20:48.626 08:55:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:48.626 08:55:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.626 08:55:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.626 08:55:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.626 08:55:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.626 08:55:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.626 08:55:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.626 08:55:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.626 08:55:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.626 08:55:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.626 08:55:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.626 08:55:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.626 08:55:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:48.626 08:55:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:48.626 08:55:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:48.626 08:55:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.626 08:55:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:48.626 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:48.626 08:55:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.626 08:55:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:48.626 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:48.626 08:55:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:48.626 08:55:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.626 08:55:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.626 08:55:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:48.626 08:55:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.626 08:55:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:48.626 Found 
net devices under 0000:af:00.0: cvl_0_0 00:20:48.626 08:55:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.626 08:55:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.626 08:55:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.626 08:55:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:48.626 08:55:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.626 08:55:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:48.626 Found net devices under 0000:af:00.1: cvl_0_1 00:20:48.626 08:55:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.626 08:55:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:48.626 08:55:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:48.626 08:55:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:48.626 08:55:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:48.626 08:55:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.626 08:55:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.626 08:55:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.626 08:55:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:48.626 08:55:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:48.626 08:55:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:48.626 08:55:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:48.626 08:55:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:48.626 08:55:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.626 08:55:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:48.626 08:55:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:48.626 08:55:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:48.626 08:55:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:48.626 08:55:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:48.626 08:55:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:48.626 08:55:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:48.626 08:55:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:48.626 08:55:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:48.626 08:55:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:48.626 08:55:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:48.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:20:48.626 00:20:48.626 --- 10.0.0.2 ping statistics --- 00:20:48.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.626 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:20:48.626 08:55:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:48.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:48.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:20:48.920 00:20:48.920 --- 10.0.0.1 ping statistics --- 00:20:48.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.920 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:20:48.920 08:55:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.920 08:55:05 -- nvmf/common.sh@411 -- # return 0 00:20:48.920 08:55:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:48.920 08:55:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.920 08:55:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:48.920 08:55:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:48.920 08:55:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.920 08:55:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:48.920 08:55:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:48.920 08:55:05 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:48.920 08:55:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:48.920 08:55:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:48.920 08:55:05 -- common/autotest_common.sh@10 -- # set +x 00:20:48.920 08:55:05 -- nvmf/common.sh@470 -- # nvmfpid=2095525 00:20:48.920 08:55:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:48.920 08:55:05 -- nvmf/common.sh@471 -- # waitforlisten 2095525 00:20:48.920 08:55:05 -- common/autotest_common.sh@817 -- # '[' -z 2095525 ']' 00:20:48.920 08:55:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.920 08:55:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:48.920 08:55:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.920 08:55:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:48.920 08:55:05 -- common/autotest_common.sh@10 -- # set +x 00:20:48.920 [2024-04-26 08:55:05.967035] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:20:48.920 [2024-04-26 08:55:05.967083] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.920 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.920 [2024-04-26 08:55:06.041796] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:48.920 [2024-04-26 08:55:06.113370] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.920 [2024-04-26 08:55:06.113411] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.920 [2024-04-26 08:55:06.113420] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.920 [2024-04-26 08:55:06.113429] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.921 [2024-04-26 08:55:06.113436] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
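The app_setup_trace notices just above name the two ways to retrieve the tracepoint data this target records (tracepoint group mask 0xFFFF, shm id 0). A short sketch, assuming the spdk_trace tool from the same build is on PATH:

    # Capture a snapshot of events from the live target, per the notice:
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis/debug:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0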
00:20:48.921 [2024-04-26 08:55:06.113583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:20:48.921 [2024-04-26 08:55:06.113661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:20:48.921 [2024-04-26 08:55:06.113767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:20:48.921 [2024-04-26 08:55:06.113768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:20:49.857 08:55:06 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:49.857 08:55:06 -- common/autotest_common.sh@850 -- # return 0
00:20:49.857 08:55:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:20:49.857 08:55:06 -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:49.857 08:55:06 -- common/autotest_common.sh@10 -- # set +x
00:20:49.857 08:55:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:49.857 08:55:06 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:49.857 08:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:49.857 08:55:06 -- common/autotest_common.sh@10 -- # set +x
00:20:49.857 [2024-04-26 08:55:06.813136] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:49.857 08:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:49.857 08:55:06 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:49.857 08:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:49.857 08:55:06 -- common/autotest_common.sh@10 -- # set +x
00:20:49.857 Malloc0
00:20:49.857 08:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:49.857 08:55:06 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:49.857 08:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:49.857 08:55:06 -- common/autotest_common.sh@10 -- # set +x
00:20:49.857 08:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:49.857 08:55:06 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:49.857 08:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:49.857 08:55:06 -- common/autotest_common.sh@10 -- # set +x
00:20:49.857 08:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:49.857 08:55:06 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:49.857 08:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:49.857 08:55:06 -- common/autotest_common.sh@10 -- # set +x
00:20:49.857 [2024-04-26 08:55:06.867711] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:49.857 08:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:49.857 08:55:06 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:20:49.857 08:55:06 -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:20:49.857 08:55:06 -- nvmf/common.sh@521 -- # config=()
00:20:49.857 08:55:06 -- nvmf/common.sh@521 -- # local subsystem config
00:20:49.857 08:55:06 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:20:49.857 08:55:06 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:20:49.857 {
00:20:49.857 "params": {
00:20:49.857 "name": "Nvme$subsystem",
00:20:49.857 "trtype": "$TEST_TRANSPORT",
00:20:49.857 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:49.857 "adrfam": "ipv4",
00:20:49.857 "trsvcid": "$NVMF_PORT",
00:20:49.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:49.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:49.857 "hdgst": ${hdgst:-false},
00:20:49.857 "ddgst": ${ddgst:-false}
00:20:49.857 },
00:20:49.857 "method": "bdev_nvme_attach_controller"
00:20:49.857 }
00:20:49.857 EOF
00:20:49.857 )")
00:20:49.857 08:55:06 -- nvmf/common.sh@543 -- # cat
00:20:49.857 08:55:06 -- nvmf/common.sh@545 -- # jq .
00:20:49.857 08:55:06 -- nvmf/common.sh@546 -- # IFS=,
00:20:49.857 08:55:06 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:20:49.857 "params": {
00:20:49.857 "name": "Nvme1",
00:20:49.857 "trtype": "tcp",
00:20:49.857 "traddr": "10.0.0.2",
00:20:49.857 "adrfam": "ipv4",
00:20:49.857 "trsvcid": "4420",
00:20:49.857 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:49.857 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:49.857 "hdgst": false,
00:20:49.857 "ddgst": false
00:20:49.857 },
00:20:49.857 "method": "bdev_nvme_attach_controller"
00:20:49.857 }'
00:20:49.857 [2024-04-26 08:55:06.921243] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:20:49.857 [2024-04-26 08:55:06.921290] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095627 ]
00:20:49.857 EAL: No free 2048 kB hugepages reported on node 1
00:20:49.857 [2024-04-26 08:55:06.991272] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:20:49.857 [2024-04-26 08:55:07.060071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:49.857 [2024-04-26 08:55:07.060163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:49.857 [2024-04-26 08:55:07.060164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:50.116 I/O targets:
00:20:50.116 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:20:50.116 
00:20:50.116 
00:20:50.116 CUnit - A unit testing framework for C - Version 2.1-3
00:20:50.116 http://cunit.sourceforge.net/
00:20:50.116 
00:20:50.116 
00:20:50.116 Suite: bdevio tests on: Nvme1n1
00:20:50.116 Test: blockdev write read block ...passed
00:20:50.116 Test: blockdev write zeroes read block ...passed
00:20:50.375 Test: blockdev write zeroes read no split ...passed
00:20:50.375 Test: blockdev write zeroes read split ...passed
00:20:50.375 Test: blockdev write zeroes read split partial ...passed
00:20:50.375 Test: blockdev reset ...[2024-04-26 08:55:07.500258] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:50.375 [2024-04-26 08:55:07.500321] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2401920 (9): Bad file descriptor
00:20:50.376 [2024-04-26 08:55:07.556198] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:50.376 passed
00:20:50.376 Test: blockdev write read 8 blocks ...passed
00:20:50.376 Test: blockdev write read size > 128k ...passed
00:20:50.376 Test: blockdev write read invalid size ...passed
00:20:50.634 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:20:50.634 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:20:50.635 Test: blockdev write read max offset ...passed
00:20:50.635 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:20:50.635 Test: blockdev writev readv 8 blocks ...passed
00:20:50.635 Test: blockdev writev readv 30 x 1block ...passed
00:20:50.635 Test: blockdev writev readv block ...passed
00:20:50.635 Test: blockdev writev readv size > 128k ...passed
00:20:50.635 Test: blockdev writev readv size > 128k in two iovs ...passed
00:20:50.635 Test: blockdev comparev and writev ...[2024-04-26 08:55:07.835326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:50.635 [2024-04-26 08:55:07.835357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:50.635 [2024-04-26 08:55:07.835373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:50.635 [2024-04-26 08:55:07.835384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:50.635 [2024-04-26 08:55:07.835822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:50.635 [2024-04-26 08:55:07.835834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:20:50.635 [2024-04-26 08:55:07.835852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:50.635 [2024-04-26 08:55:07.835862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:20:50.635 [2024-04-26 08:55:07.836288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:50.635 [2024-04-26 08:55:07.836302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:20:50.635 [2024-04-26 08:55:07.836315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:50.635 [2024-04-26 08:55:07.836326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:20:50.635 [2024-04-26 08:55:07.836754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:50.635 [2024-04-26 08:55:07.836768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:20:50.635 [2024-04-26 08:55:07.836782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:20:50.635 [2024-04-26 08:55:07.836793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:20:50.635 passed
00:20:50.894 Test: blockdev nvme passthru rw ...passed
00:20:50.894 Test: blockdev nvme passthru vendor specific ...[2024-04-26 08:55:07.920371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:50.894 [2024-04-26 08:55:07.920395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:20:50.894 [2024-04-26 08:55:07.920771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:50.894 [2024-04-26 08:55:07.920785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:20:50.894 [2024-04-26 08:55:07.921188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:50.894 [2024-04-26 08:55:07.921202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:20:50.894 [2024-04-26 08:55:07.921580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:50.894 [2024-04-26 08:55:07.921593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:20:50.894 passed
00:20:50.894 Test: blockdev nvme admin passthru ...passed
00:20:50.894 Test: blockdev copy ...passed
00:20:50.894 
00:20:50.894 Run Summary:    Type  Total    Ran  Passed  Failed  Inactive
00:20:50.894               suites      1      1     n/a       0         0
00:20:50.894                tests     23     23      23       0         0
00:20:50.894              asserts    152    152     152       0       n/a
00:20:50.894 
00:20:50.894 Elapsed time = 1.451 seconds
00:20:51.153 08:55:08 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:51.153 08:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:51.153 08:55:08 -- common/autotest_common.sh@10 -- # set +x
00:20:51.153 08:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:51.153 08:55:08 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:20:51.153 08:55:08 -- target/bdevio.sh@30 -- # nvmftestfini
00:20:51.153 08:55:08 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:51.153 08:55:08 -- nvmf/common.sh@117 -- # sync
00:20:51.153 08:55:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:51.153 08:55:08 -- nvmf/common.sh@120 -- # set +e
00:20:51.153 08:55:08 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:51.153 08:55:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:51.153 rmmod nvme_tcp
00:20:51.153 rmmod nvme_fabrics
00:20:51.153 rmmod nvme_keyring
00:20:51.153 08:55:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:51.153 08:55:08 -- nvmf/common.sh@124 -- # set -e
00:20:51.153 08:55:08 -- nvmf/common.sh@125 -- # return 0
00:20:51.153 08:55:08 -- nvmf/common.sh@478 -- # '[' -n 2095525 ']'
00:20:51.153 08:55:08 -- nvmf/common.sh@479 -- # killprocess 2095525
00:20:51.153 08:55:08 -- common/autotest_common.sh@936 -- # '[' -z 2095525 ']'
00:20:51.153 08:55:08 -- common/autotest_common.sh@940 -- # kill -0 2095525
00:20:51.153 08:55:08 -- common/autotest_common.sh@941 -- # uname
00:20:51.153 08:55:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:51.153 08:55:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2095525
00:20:51.153 08:55:08 -- common/autotest_common.sh@942 -- # process_name=reactor_3
00:20:51.153 08:55:08 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']'
00:20:51.153 08:55:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2095525'
00:20:51.153 killing process with pid 2095525
00:20:51.153 08:55:08 -- common/autotest_common.sh@955 -- # kill 2095525
00:20:51.153 08:55:08 -- common/autotest_common.sh@960 -- # wait 2095525
00:20:51.413 08:55:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:51.413 08:55:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:51.413 08:55:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:51.413 08:55:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:51.413 08:55:08 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:51.413 08:55:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:51.413 08:55:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:51.413 08:55:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:53.951 08:55:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:53.951 
00:20:53.951 real 0m11.675s
00:20:53.951 user 0m13.690s
00:20:53.951 sys 0m5.904s
00:20:53.951 08:55:10 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:53.951 08:55:10 -- common/autotest_common.sh@10 -- # set +x
00:20:53.951 ************************************
00:20:53.951 END TEST nvmf_bdevio
00:20:53.951 ************************************
00:20:53.951 08:55:10 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']'
00:20:53.951 08:55:10 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:20:53.951 08:55:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:20:53.951 08:55:10 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:53.951 08:55:10 -- common/autotest_common.sh@10 -- # set +x
00:20:53.951 ************************************
00:20:53.951 START TEST nvmf_bdevio_no_huge
00:20:53.951 ************************************
00:20:53.951 08:55:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:20:53.951 * Looking for test storage...
00:20:53.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:53.951 08:55:10 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.951 08:55:10 -- nvmf/common.sh@7 -- # uname -s 00:20:53.951 08:55:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.951 08:55:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.951 08:55:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.951 08:55:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.980 08:55:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.980 08:55:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.980 08:55:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.980 08:55:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.980 08:55:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.980 08:55:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.980 08:55:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:53.980 08:55:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:53.980 08:55:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.980 08:55:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.980 08:55:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.980 08:55:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.980 08:55:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.980 08:55:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.980 08:55:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.980 08:55:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.980 08:55:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.980 08:55:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.980 08:55:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.980 08:55:10 -- paths/export.sh@5 -- # export PATH 00:20:53.980 08:55:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.980 08:55:10 -- nvmf/common.sh@47 -- # : 0 00:20:53.981 08:55:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:53.981 08:55:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:53.981 08:55:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.981 08:55:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.981 08:55:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.981 08:55:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:53.981 08:55:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:53.981 08:55:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:53.981 08:55:10 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:53.981 08:55:10 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:53.981 08:55:10 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:53.981 08:55:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:53.981 08:55:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.981 08:55:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:53.981 08:55:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:53.981 08:55:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:53.981 08:55:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.981 08:55:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.981 08:55:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.981 08:55:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:53.981 08:55:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:53.981 08:55:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:53.981 08:55:10 -- common/autotest_common.sh@10 -- # set +x 00:21:00.543 08:55:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:00.543 08:55:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:00.543 08:55:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:00.543 08:55:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:00.543 08:55:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:00.543 08:55:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:00.543 08:55:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:00.543 08:55:17 -- nvmf/common.sh@295 -- # net_devs=() 00:21:00.543 08:55:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:00.543 08:55:17 -- nvmf/common.sh@296 
-- # e810=() 00:21:00.543 08:55:17 -- nvmf/common.sh@296 -- # local -ga e810 00:21:00.543 08:55:17 -- nvmf/common.sh@297 -- # x722=() 00:21:00.543 08:55:17 -- nvmf/common.sh@297 -- # local -ga x722 00:21:00.543 08:55:17 -- nvmf/common.sh@298 -- # mlx=() 00:21:00.543 08:55:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:00.543 08:55:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.543 08:55:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.543 08:55:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.543 08:55:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.543 08:55:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.543 08:55:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.543 08:55:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.543 08:55:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.543 08:55:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.543 08:55:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.544 08:55:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.544 08:55:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:00.544 08:55:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:00.544 08:55:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:00.544 08:55:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:00.544 08:55:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:00.544 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:00.544 08:55:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:00.544 08:55:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:00.544 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:00.544 08:55:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:00.544 08:55:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:00.544 08:55:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.544 08:55:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:00.544 08:55:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.544 08:55:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:00.544 Found 
net devices under 0000:af:00.0: cvl_0_0 00:21:00.544 08:55:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.544 08:55:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:00.544 08:55:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.544 08:55:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:00.544 08:55:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.544 08:55:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:00.544 Found net devices under 0000:af:00.1: cvl_0_1 00:21:00.544 08:55:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.544 08:55:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:00.544 08:55:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:00.544 08:55:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:00.544 08:55:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.544 08:55:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.544 08:55:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.544 08:55:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:00.544 08:55:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.544 08:55:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.544 08:55:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:00.544 08:55:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.544 08:55:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.544 08:55:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:00.544 08:55:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:00.544 08:55:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.544 08:55:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.544 08:55:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.544 08:55:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.544 08:55:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:00.544 08:55:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.544 08:55:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.544 08:55:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.544 08:55:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:00.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:21:00.544 00:21:00.544 --- 10.0.0.2 ping statistics --- 00:21:00.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.544 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:21:00.544 08:55:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:21:00.544 00:21:00.544 --- 10.0.0.1 ping statistics --- 00:21:00.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.544 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:21:00.544 08:55:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.544 08:55:17 -- nvmf/common.sh@411 -- # return 0 00:21:00.544 08:55:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:00.544 08:55:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.544 08:55:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:00.544 08:55:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.544 08:55:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:00.544 08:55:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:00.544 08:55:17 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:00.544 08:55:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:00.544 08:55:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:00.544 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:21:00.803 08:55:17 -- nvmf/common.sh@470 -- # nvmfpid=2099593 00:21:00.803 08:55:17 -- nvmf/common.sh@471 -- # waitforlisten 2099593 00:21:00.803 08:55:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:00.803 08:55:17 -- common/autotest_common.sh@817 -- # '[' -z 2099593 ']' 00:21:00.803 08:55:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.803 08:55:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:00.803 08:55:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.803 08:55:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:00.803 08:55:17 -- common/autotest_common.sh@10 -- # set +x 00:21:00.803 [2024-04-26 08:55:17.850131] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:21:00.803 [2024-04-26 08:55:17.850182] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:00.803 [2024-04-26 08:55:17.931098] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.803 [2024-04-26 08:55:18.027460] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.803 [2024-04-26 08:55:18.027498] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.803 [2024-04-26 08:55:18.027508] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.803 [2024-04-26 08:55:18.027517] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.803 [2024-04-26 08:55:18.027524] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
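Compared with the hugepage run earlier in this log, the only change on the target side is the EAL memory mode: nvmf_tgt is started with --no-huge -s 1024, which shows up in the EAL parameter line as -m 1024 --no-huge --iova-mode=va, i.e. 1024 MB of ordinary anonymous memory with VA-mode IOVAs instead of hugepage-backed DMA memory. Side by side (SPDK here is a hypothetical shorthand for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk tree, introduced only for readability):

# hugepage-backed target from the nvmf_bdevio section
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
# no-huge target from this section: same core mask, plain memory
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78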
00:21:00.803 [2024-04-26 08:55:18.027574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:00.803 [2024-04-26 08:55:18.027662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:00.803 [2024-04-26 08:55:18.028416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.803 [2024-04-26 08:55:18.028417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:01.738 08:55:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:01.738 08:55:18 -- common/autotest_common.sh@850 -- # return 0 00:21:01.738 08:55:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:01.738 08:55:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:01.738 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:21:01.738 08:55:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.738 08:55:18 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:01.738 08:55:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.738 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:21:01.738 [2024-04-26 08:55:18.696786] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.738 08:55:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.738 08:55:18 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:01.738 08:55:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.738 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:21:01.738 Malloc0 00:21:01.738 08:55:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.738 08:55:18 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:01.738 08:55:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.738 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:21:01.738 08:55:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.738 08:55:18 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:01.738 08:55:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.738 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:21:01.738 08:55:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.738 08:55:18 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.738 08:55:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.738 08:55:18 -- common/autotest_common.sh@10 -- # set +x 00:21:01.738 [2024-04-26 08:55:18.741454] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.738 08:55:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.738 08:55:18 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:01.738 08:55:18 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:01.738 08:55:18 -- nvmf/common.sh@521 -- # config=() 00:21:01.738 08:55:18 -- nvmf/common.sh@521 -- # local subsystem config 00:21:01.738 08:55:18 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:01.738 08:55:18 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:01.738 { 00:21:01.738 "params": { 00:21:01.738 "name": "Nvme$subsystem", 00:21:01.738 "trtype": "$TEST_TRANSPORT", 00:21:01.738 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.738 "adrfam": "ipv4", 00:21:01.738 
"trsvcid": "$NVMF_PORT", 00:21:01.738 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.738 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.738 "hdgst": ${hdgst:-false}, 00:21:01.738 "ddgst": ${ddgst:-false} 00:21:01.738 }, 00:21:01.738 "method": "bdev_nvme_attach_controller" 00:21:01.738 } 00:21:01.738 EOF 00:21:01.738 )") 00:21:01.738 08:55:18 -- nvmf/common.sh@543 -- # cat 00:21:01.738 08:55:18 -- nvmf/common.sh@545 -- # jq . 00:21:01.738 08:55:18 -- nvmf/common.sh@546 -- # IFS=, 00:21:01.738 08:55:18 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:01.738 "params": { 00:21:01.738 "name": "Nvme1", 00:21:01.738 "trtype": "tcp", 00:21:01.738 "traddr": "10.0.0.2", 00:21:01.738 "adrfam": "ipv4", 00:21:01.738 "trsvcid": "4420", 00:21:01.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.738 "hdgst": false, 00:21:01.738 "ddgst": false 00:21:01.738 }, 00:21:01.738 "method": "bdev_nvme_attach_controller" 00:21:01.738 }' 00:21:01.738 [2024-04-26 08:55:18.793535] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:21:01.738 [2024-04-26 08:55:18.793588] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2099872 ] 00:21:01.738 [2024-04-26 08:55:18.869572] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:01.738 [2024-04-26 08:55:18.967240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.738 [2024-04-26 08:55:18.967334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.738 [2024-04-26 08:55:18.967337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.996 I/O targets: 00:21:01.996 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:01.996 00:21:01.996 00:21:01.996 CUnit - A unit testing framework for C - Version 2.1-3 00:21:01.996 http://cunit.sourceforge.net/ 00:21:01.996 00:21:01.996 00:21:01.996 Suite: bdevio tests on: Nvme1n1 00:21:01.996 Test: blockdev write read block ...passed 00:21:01.996 Test: blockdev write zeroes read block ...passed 00:21:01.996 Test: blockdev write zeroes read no split ...passed 00:21:02.254 Test: blockdev write zeroes read split ...passed 00:21:02.254 Test: blockdev write zeroes read split partial ...passed 00:21:02.254 Test: blockdev reset ...[2024-04-26 08:55:19.341957] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:02.254 [2024-04-26 08:55:19.342020] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfdc40 (9): Bad file descriptor 00:21:02.254 [2024-04-26 08:55:19.359705] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:02.254 passed 00:21:02.254 Test: blockdev write read 8 blocks ...passed 00:21:02.254 Test: blockdev write read size > 128k ...passed 00:21:02.254 Test: blockdev write read invalid size ...passed 00:21:02.254 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:02.254 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:02.254 Test: blockdev write read max offset ...passed 00:21:02.254 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:02.254 Test: blockdev writev readv 8 blocks ...passed 00:21:02.512 Test: blockdev writev readv 30 x 1block ...passed 00:21:02.512 Test: blockdev writev readv block ...passed 00:21:02.512 Test: blockdev writev readv size > 128k ...passed 00:21:02.512 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:02.512 Test: blockdev comparev and writev ...[2024-04-26 08:55:19.595298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:02.512 [2024-04-26 08:55:19.595329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:02.512 [2024-04-26 08:55:19.595346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:02.512 [2024-04-26 08:55:19.595357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:02.512 [2024-04-26 08:55:19.595854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:02.512 [2024-04-26 08:55:19.595869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:02.512 [2024-04-26 08:55:19.595883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:02.512 [2024-04-26 08:55:19.595893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:02.513 [2024-04-26 08:55:19.596327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:02.513 [2024-04-26 08:55:19.596343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:02.513 [2024-04-26 08:55:19.596358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:02.513 [2024-04-26 08:55:19.596368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:02.513 [2024-04-26 08:55:19.596804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:02.513 [2024-04-26 08:55:19.596818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:02.513 [2024-04-26 08:55:19.596833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:02.513 [2024-04-26 08:55:19.596842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:02.513 passed 00:21:02.513 Test: blockdev nvme passthru rw ...passed 00:21:02.513 Test: blockdev nvme passthru vendor specific ...[2024-04-26 08:55:19.680344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:02.513 [2024-04-26 08:55:19.680361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:02.513 [2024-04-26 08:55:19.680735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:02.513 [2024-04-26 08:55:19.680749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:02.513 [2024-04-26 08:55:19.681119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:02.513 [2024-04-26 08:55:19.681131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:02.513 [2024-04-26 08:55:19.681493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:02.513 [2024-04-26 08:55:19.681506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:02.513 passed 00:21:02.513 Test: blockdev nvme admin passthru ...passed 00:21:02.513 Test: blockdev copy ...passed 00:21:02.513 00:21:02.513 Run Summary: Type Total Ran Passed Failed Inactive 00:21:02.513 suites 1 1 n/a 0 0 00:21:02.513 tests 23 23 23 0 0 00:21:02.513 asserts 152 152 152 0 n/a 00:21:02.513 00:21:02.513 Elapsed time = 1.218 seconds 00:21:03.080 08:55:20 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:03.080 08:55:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:03.080 08:55:20 -- common/autotest_common.sh@10 -- # set +x 00:21:03.080 08:55:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:03.080 08:55:20 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:03.080 08:55:20 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:03.080 08:55:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:03.080 08:55:20 -- nvmf/common.sh@117 -- # sync 00:21:03.080 08:55:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:03.080 08:55:20 -- nvmf/common.sh@120 -- # set +e 00:21:03.080 08:55:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:03.080 08:55:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:03.080 rmmod nvme_tcp 00:21:03.080 rmmod nvme_fabrics 00:21:03.080 rmmod nvme_keyring 00:21:03.080 08:55:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:03.080 08:55:20 -- nvmf/common.sh@124 -- # set -e 00:21:03.080 08:55:20 -- nvmf/common.sh@125 -- # return 0 00:21:03.080 08:55:20 -- nvmf/common.sh@478 -- # '[' -n 2099593 ']' 00:21:03.080 08:55:20 -- nvmf/common.sh@479 -- # killprocess 2099593 00:21:03.080 08:55:20 -- common/autotest_common.sh@936 -- # '[' -z 2099593 ']' 00:21:03.080 08:55:20 -- common/autotest_common.sh@940 -- # kill -0 2099593 00:21:03.080 08:55:20 -- common/autotest_common.sh@941 -- # uname 00:21:03.080 08:55:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:03.080 08:55:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2099593 00:21:03.080 08:55:20 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:21:03.080 08:55:20 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:21:03.080 08:55:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2099593' 00:21:03.080 killing process with pid 2099593 00:21:03.080 08:55:20 -- common/autotest_common.sh@955 -- # kill 2099593 00:21:03.080 08:55:20 -- common/autotest_common.sh@960 -- # wait 2099593 00:21:03.373 08:55:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:03.373 08:55:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:03.373 08:55:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:03.373 08:55:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:03.373 08:55:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:03.373 08:55:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.373 08:55:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.373 08:55:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.910 08:55:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:05.910 00:21:05.910 real 0m11.849s 00:21:05.910 user 0m13.753s 00:21:05.910 sys 0m6.405s 00:21:05.910 08:55:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:05.910 08:55:22 -- common/autotest_common.sh@10 -- # set +x 00:21:05.910 ************************************ 00:21:05.910 END TEST nvmf_bdevio_no_huge 00:21:05.910 ************************************ 00:21:05.910 08:55:22 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:05.910 08:55:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:05.910 08:55:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:05.910 08:55:22 -- common/autotest_common.sh@10 -- # set +x 00:21:05.910 ************************************ 00:21:05.910 START TEST nvmf_tls 00:21:05.910 ************************************ 00:21:05.910 08:55:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:05.910 * Looking for test storage... 
00:21:05.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:05.910 08:55:22 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.910 08:55:22 -- nvmf/common.sh@7 -- # uname -s 00:21:05.910 08:55:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.910 08:55:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.910 08:55:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.910 08:55:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.910 08:55:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.910 08:55:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.910 08:55:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.910 08:55:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.910 08:55:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.910 08:55:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.910 08:55:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:05.910 08:55:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:05.910 08:55:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.910 08:55:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.910 08:55:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.910 08:55:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.910 08:55:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.910 08:55:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.910 08:55:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.910 08:55:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.910 08:55:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.910 08:55:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.910 08:55:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.910 08:55:23 -- paths/export.sh@5 -- # export PATH 00:21:05.910 08:55:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.910 08:55:23 -- nvmf/common.sh@47 -- # : 0 00:21:05.910 08:55:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:05.910 08:55:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:05.910 08:55:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.911 08:55:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.911 08:55:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.911 08:55:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:05.911 08:55:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:05.911 08:55:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:05.911 08:55:23 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:05.911 08:55:23 -- target/tls.sh@62 -- # nvmftestinit 00:21:05.911 08:55:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:05.911 08:55:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.911 08:55:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:05.911 08:55:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:05.911 08:55:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:05.911 08:55:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.911 08:55:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.911 08:55:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.911 08:55:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:05.911 08:55:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:05.911 08:55:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:05.911 08:55:23 -- common/autotest_common.sh@10 -- # set +x 00:21:14.024 08:55:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:14.024 08:55:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:14.024 08:55:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:14.024 08:55:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:14.024 08:55:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:14.024 08:55:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:14.024 08:55:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:14.024 08:55:29 -- nvmf/common.sh@295 -- # net_devs=() 00:21:14.024 08:55:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:14.024 08:55:29 -- nvmf/common.sh@296 -- # e810=() 00:21:14.024 
08:55:29 -- nvmf/common.sh@296 -- # local -ga e810 00:21:14.024 08:55:29 -- nvmf/common.sh@297 -- # x722=() 00:21:14.024 08:55:29 -- nvmf/common.sh@297 -- # local -ga x722 00:21:14.024 08:55:29 -- nvmf/common.sh@298 -- # mlx=() 00:21:14.024 08:55:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:14.024 08:55:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.024 08:55:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.024 08:55:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.024 08:55:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.024 08:55:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.024 08:55:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.024 08:55:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.024 08:55:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.024 08:55:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.024 08:55:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.024 08:55:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.024 08:55:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:14.024 08:55:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:14.024 08:55:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:14.024 08:55:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:14.024 08:55:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:14.024 08:55:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:14.024 08:55:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.024 08:55:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:14.024 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:14.024 08:55:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.024 08:55:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.025 08:55:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.025 08:55:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.025 08:55:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.025 08:55:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.025 08:55:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:14.025 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:14.025 08:55:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.025 08:55:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.025 08:55:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.025 08:55:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.025 08:55:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.025 08:55:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:14.025 08:55:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:14.025 08:55:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:14.025 08:55:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.025 08:55:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.025 08:55:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:14.025 08:55:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.025 08:55:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:14.025 Found net devices under 
0000:af:00.0: cvl_0_0 00:21:14.025 08:55:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.025 08:55:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.025 08:55:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.025 08:55:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:14.025 08:55:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.025 08:55:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:14.025 Found net devices under 0000:af:00.1: cvl_0_1 00:21:14.025 08:55:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.025 08:55:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:14.025 08:55:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:14.025 08:55:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:14.025 08:55:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:14.025 08:55:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:14.025 08:55:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.025 08:55:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.025 08:55:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.025 08:55:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:14.025 08:55:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:14.025 08:55:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:14.025 08:55:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:14.025 08:55:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:14.025 08:55:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.025 08:55:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:14.025 08:55:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:14.025 08:55:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:14.025 08:55:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:14.025 08:55:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:14.025 08:55:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:14.025 08:55:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:14.025 08:55:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:14.025 08:55:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:14.025 08:55:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:14.025 08:55:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:14.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:21:14.025 00:21:14.025 --- 10.0.0.2 ping statistics --- 00:21:14.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.025 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:21:14.025 08:55:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:14.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:14.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:21:14.025 00:21:14.025 --- 10.0.0.1 ping statistics --- 00:21:14.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.025 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:21:14.025 08:55:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.025 08:55:30 -- nvmf/common.sh@411 -- # return 0 00:21:14.025 08:55:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:14.025 08:55:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.025 08:55:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:14.025 08:55:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:14.025 08:55:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.025 08:55:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:14.025 08:55:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:14.025 08:55:30 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:14.025 08:55:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:14.025 08:55:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:14.025 08:55:30 -- common/autotest_common.sh@10 -- # set +x 00:21:14.025 08:55:30 -- nvmf/common.sh@470 -- # nvmfpid=2103843 00:21:14.025 08:55:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:14.025 08:55:30 -- nvmf/common.sh@471 -- # waitforlisten 2103843 00:21:14.025 08:55:30 -- common/autotest_common.sh@817 -- # '[' -z 2103843 ']' 00:21:14.025 08:55:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.025 08:55:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:14.025 08:55:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.025 08:55:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:14.025 08:55:30 -- common/autotest_common.sh@10 -- # set +x 00:21:14.025 [2024-04-26 08:55:30.155916] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:21:14.025 [2024-04-26 08:55:30.155961] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.025 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.025 [2024-04-26 08:55:30.231702] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.025 [2024-04-26 08:55:30.308008] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.025 [2024-04-26 08:55:30.308046] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.025 [2024-04-26 08:55:30.308056] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.025 [2024-04-26 08:55:30.308065] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.025 [2024-04-26 08:55:30.308073] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:14.025 [2024-04-26 08:55:30.308107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.025 08:55:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:14.025 08:55:30 -- common/autotest_common.sh@850 -- # return 0 00:21:14.025 08:55:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:14.025 08:55:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:14.025 08:55:30 -- common/autotest_common.sh@10 -- # set +x 00:21:14.025 08:55:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.025 08:55:30 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:14.025 08:55:30 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:14.025 true 00:21:14.025 08:55:31 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:14.025 08:55:31 -- target/tls.sh@73 -- # jq -r .tls_version 00:21:14.284 08:55:31 -- target/tls.sh@73 -- # version=0 00:21:14.284 08:55:31 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:14.284 08:55:31 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:14.284 08:55:31 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:14.284 08:55:31 -- target/tls.sh@81 -- # jq -r .tls_version 00:21:14.542 08:55:31 -- target/tls.sh@81 -- # version=13 00:21:14.542 08:55:31 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:14.542 08:55:31 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:14.799 08:55:31 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:14.799 08:55:31 -- target/tls.sh@89 -- # jq -r .tls_version 00:21:14.799 08:55:32 -- target/tls.sh@89 -- # version=7 00:21:14.799 08:55:32 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:14.799 08:55:32 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:14.799 08:55:32 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:15.056 08:55:32 -- target/tls.sh@96 -- # ktls=false 00:21:15.056 08:55:32 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:15.056 08:55:32 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:15.314 08:55:32 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:15.314 08:55:32 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:15.314 08:55:32 -- target/tls.sh@104 -- # ktls=true 00:21:15.314 08:55:32 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:15.314 08:55:32 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:15.571 08:55:32 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:15.571 08:55:32 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:15.830 08:55:32 -- target/tls.sh@112 -- # ktls=false 00:21:15.830 08:55:32 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:15.830 08:55:32 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
00:21:15.830 08:55:32 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:15.830 08:55:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:15.830 08:55:32 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:15.830 08:55:32 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:21:15.830 08:55:32 -- nvmf/common.sh@693 -- # digest=1 00:21:15.830 08:55:32 -- nvmf/common.sh@694 -- # python - 00:21:15.830 08:55:32 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:15.830 08:55:32 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:15.830 08:55:32 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:15.830 08:55:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:15.830 08:55:32 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:15.830 08:55:32 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:21:15.830 08:55:32 -- nvmf/common.sh@693 -- # digest=1 00:21:15.830 08:55:32 -- nvmf/common.sh@694 -- # python - 00:21:15.830 08:55:32 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:15.830 08:55:32 -- target/tls.sh@121 -- # mktemp 00:21:15.830 08:55:32 -- target/tls.sh@121 -- # key_path=/tmp/tmp.23YZopw52r 00:21:15.830 08:55:32 -- target/tls.sh@122 -- # mktemp 00:21:15.830 08:55:32 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.4baRui9IGG 00:21:15.830 08:55:32 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:15.830 08:55:32 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:15.830 08:55:32 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.23YZopw52r 00:21:15.830 08:55:32 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.4baRui9IGG 00:21:15.830 08:55:32 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:16.089 08:55:33 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:16.347 08:55:33 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.23YZopw52r 00:21:16.347 08:55:33 -- target/tls.sh@49 -- # local key=/tmp/tmp.23YZopw52r 00:21:16.347 08:55:33 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:16.347 [2024-04-26 08:55:33.528956] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.347 08:55:33 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:16.605 08:55:33 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:16.863 [2024-04-26 08:55:33.865818] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:16.863 [2024-04-26 08:55:33.866037] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.863 08:55:33 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:16.863 malloc0 00:21:16.863 08:55:34 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:17.121 08:55:34 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.23YZopw52r 00:21:17.121 [2024-04-26 08:55:34.347228] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:17.121 08:55:34 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.23YZopw52r 00:21:17.378 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.378 Initializing NVMe Controllers 00:21:27.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:27.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:27.378 Initialization complete. Launching workers. 00:21:27.378 ======================================================== 00:21:27.378 Latency(us) 00:21:27.378 Device Information : IOPS MiB/s Average min max 00:21:27.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16119.47 62.97 3970.83 760.35 5435.76 00:21:27.378 ======================================================== 00:21:27.378 Total : 16119.47 62.97 3970.83 760.35 5435.76 00:21:27.378 00:21:27.378 08:55:44 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.23YZopw52r 00:21:27.378 08:55:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:27.378 08:55:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:27.378 08:55:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:27.378 08:55:44 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.23YZopw52r' 00:21:27.378 08:55:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:27.378 08:55:44 -- target/tls.sh@28 -- # bdevperf_pid=2106293 00:21:27.378 08:55:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:27.378 08:55:44 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:27.378 08:55:44 -- target/tls.sh@31 -- # waitforlisten 2106293 /var/tmp/bdevperf.sock 00:21:27.378 08:55:44 -- common/autotest_common.sh@817 -- # '[' -z 2106293 ']' 00:21:27.378 08:55:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.378 08:55:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:27.378 08:55:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.378 08:55:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:27.378 08:55:44 -- common/autotest_common.sh@10 -- # set +x 00:21:27.378 [2024-04-26 08:55:44.519930] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:21:27.378 [2024-04-26 08:55:44.519990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2106293 ] 00:21:27.378 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.378 [2024-04-26 08:55:44.586355] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.636 [2024-04-26 08:55:44.659873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.202 08:55:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:28.202 08:55:45 -- common/autotest_common.sh@850 -- # return 0 00:21:28.202 08:55:45 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.23YZopw52r 00:21:28.461 [2024-04-26 08:55:45.482632] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:28.461 [2024-04-26 08:55:45.482712] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:28.461 TLSTESTn1 00:21:28.461 08:55:45 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:28.461 Running I/O for 10 seconds... 00:21:40.665 00:21:40.665 Latency(us) 00:21:40.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.665 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:40.665 Verification LBA range: start 0x0 length 0x2000 00:21:40.665 TLSTESTn1 : 10.07 1392.44 5.44 0.00 0.00 91659.10 6868.17 184549.38 00:21:40.665 =================================================================================================================== 00:21:40.665 Total : 1392.44 5.44 0.00 0.00 91659.10 6868.17 184549.38 00:21:40.665 0 00:21:40.665 08:55:55 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:40.665 08:55:55 -- target/tls.sh@45 -- # killprocess 2106293 00:21:40.665 08:55:55 -- common/autotest_common.sh@936 -- # '[' -z 2106293 ']' 00:21:40.665 08:55:55 -- common/autotest_common.sh@940 -- # kill -0 2106293 00:21:40.665 08:55:55 -- common/autotest_common.sh@941 -- # uname 00:21:40.665 08:55:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:40.665 08:55:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2106293 00:21:40.665 08:55:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:40.665 08:55:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:40.665 08:55:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2106293' 00:21:40.665 killing process with pid 2106293 00:21:40.665 08:55:55 -- common/autotest_common.sh@955 -- # kill 2106293 00:21:40.665 Received shutdown signal, test time was about 10.000000 seconds 00:21:40.665 00:21:40.665 Latency(us) 00:21:40.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.665 =================================================================================================================== 00:21:40.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:40.665 [2024-04-26 08:55:55.859261] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:40.665 08:55:55 -- common/autotest_common.sh@960 -- # wait 2106293 00:21:40.665 08:55:56 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4baRui9IGG 00:21:40.665 08:55:56 -- common/autotest_common.sh@638 -- # local es=0 00:21:40.665 08:55:56 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4baRui9IGG 00:21:40.665 08:55:56 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:40.665 08:55:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:40.665 08:55:56 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:40.665 08:55:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:40.665 08:55:56 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4baRui9IGG 00:21:40.665 08:55:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:40.665 08:55:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:40.665 08:55:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:40.665 08:55:56 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.4baRui9IGG' 00:21:40.665 08:55:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:40.665 08:55:56 -- target/tls.sh@28 -- # bdevperf_pid=2108268 00:21:40.665 08:55:56 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:40.665 08:55:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:40.665 08:55:56 -- target/tls.sh@31 -- # waitforlisten 2108268 /var/tmp/bdevperf.sock 00:21:40.665 08:55:56 -- common/autotest_common.sh@817 -- # '[' -z 2108268 ']' 00:21:40.665 08:55:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.665 08:55:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:40.665 08:55:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.665 08:55:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:40.665 08:55:56 -- common/autotest_common.sh@10 -- # set +x 00:21:40.665 [2024-04-26 08:55:56.111892] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:21:40.665 [2024-04-26 08:55:56.111946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108268 ] 00:21:40.665 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.665 [2024-04-26 08:55:56.179113] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.665 [2024-04-26 08:55:56.252437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.665 08:55:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:40.665 08:55:56 -- common/autotest_common.sh@850 -- # return 0 00:21:40.665 08:55:56 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4baRui9IGG 00:21:40.665 [2024-04-26 08:55:57.054763] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.665 [2024-04-26 08:55:57.054838] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:40.665 [2024-04-26 08:55:57.059538] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:40.665 [2024-04-26 08:55:57.060166] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1444750 (107): Transport endpoint is not connected 00:21:40.665 [2024-04-26 08:55:57.061157] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1444750 (9): Bad file descriptor 00:21:40.665 [2024-04-26 08:55:57.062159] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:40.665 [2024-04-26 08:55:57.062171] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:40.665 [2024-04-26 08:55:57.062179] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:40.665 request: 00:21:40.665 { 00:21:40.665 "name": "TLSTEST", 00:21:40.665 "trtype": "tcp", 00:21:40.665 "traddr": "10.0.0.2", 00:21:40.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:40.665 "adrfam": "ipv4", 00:21:40.665 "trsvcid": "4420", 00:21:40.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.665 "psk": "/tmp/tmp.4baRui9IGG", 00:21:40.665 "method": "bdev_nvme_attach_controller", 00:21:40.665 "req_id": 1 00:21:40.665 } 00:21:40.665 Got JSON-RPC error response 00:21:40.665 response: 00:21:40.665 { 00:21:40.665 "code": -32602, 00:21:40.665 "message": "Invalid parameters" 00:21:40.665 } 00:21:40.665 08:55:57 -- target/tls.sh@36 -- # killprocess 2108268 00:21:40.665 08:55:57 -- common/autotest_common.sh@936 -- # '[' -z 2108268 ']' 00:21:40.665 08:55:57 -- common/autotest_common.sh@940 -- # kill -0 2108268 00:21:40.665 08:55:57 -- common/autotest_common.sh@941 -- # uname 00:21:40.665 08:55:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:40.665 08:55:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2108268 00:21:40.665 08:55:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:40.666 08:55:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:40.666 08:55:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2108268' 00:21:40.666 killing process with pid 2108268 00:21:40.666 08:55:57 -- common/autotest_common.sh@955 -- # kill 2108268 00:21:40.666 Received shutdown signal, test time was about 10.000000 seconds 00:21:40.666 00:21:40.666 Latency(us) 00:21:40.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.666 =================================================================================================================== 00:21:40.666 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:40.666 [2024-04-26 08:55:57.132956] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:40.666 08:55:57 -- common/autotest_common.sh@960 -- # wait 2108268 00:21:40.666 08:55:57 -- target/tls.sh@37 -- # return 1 00:21:40.666 08:55:57 -- common/autotest_common.sh@641 -- # es=1 00:21:40.666 08:55:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:40.666 08:55:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:40.666 08:55:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:40.666 08:55:57 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.23YZopw52r 00:21:40.666 08:55:57 -- common/autotest_common.sh@638 -- # local es=0 00:21:40.666 08:55:57 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.23YZopw52r 00:21:40.666 08:55:57 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:40.666 08:55:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:40.666 08:55:57 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:40.666 08:55:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:40.666 08:55:57 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.23YZopw52r 00:21:40.666 08:55:57 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:40.666 08:55:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:40.666 08:55:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:21:40.666 08:55:57 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.23YZopw52r' 00:21:40.666 08:55:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:40.666 08:55:57 -- target/tls.sh@28 -- # bdevperf_pid=2108437 00:21:40.666 08:55:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:40.666 08:55:57 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:40.666 08:55:57 -- target/tls.sh@31 -- # waitforlisten 2108437 /var/tmp/bdevperf.sock 00:21:40.666 08:55:57 -- common/autotest_common.sh@817 -- # '[' -z 2108437 ']' 00:21:40.666 08:55:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.666 08:55:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:40.666 08:55:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.666 08:55:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:40.666 08:55:57 -- common/autotest_common.sh@10 -- # set +x 00:21:40.666 [2024-04-26 08:55:57.378455] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:21:40.666 [2024-04-26 08:55:57.378511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108437 ] 00:21:40.666 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.666 [2024-04-26 08:55:57.445610] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.666 [2024-04-26 08:55:57.508079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.234 08:55:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:41.234 08:55:58 -- common/autotest_common.sh@850 -- # return 0 00:21:41.234 08:55:58 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.23YZopw52r 00:21:41.234 [2024-04-26 08:55:58.326402] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.234 [2024-04-26 08:55:58.326502] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:41.234 [2024-04-26 08:55:58.331309] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:41.234 [2024-04-26 08:55:58.331334] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:41.234 [2024-04-26 08:55:58.331360] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:41.234 [2024-04-26 08:55:58.332006] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd9750 (107): Transport endpoint is not connected 00:21:41.234 [2024-04-26 08:55:58.332996] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd9750 (9): Bad file descriptor 00:21:41.234 [2024-04-26 08:55:58.333997] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:41.234 [2024-04-26 08:55:58.334010] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:41.234 [2024-04-26 08:55:58.334019] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:41.234 request: 00:21:41.234 { 00:21:41.234 "name": "TLSTEST", 00:21:41.234 "trtype": "tcp", 00:21:41.234 "traddr": "10.0.0.2", 00:21:41.234 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:41.234 "adrfam": "ipv4", 00:21:41.234 "trsvcid": "4420", 00:21:41.234 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.234 "psk": "/tmp/tmp.23YZopw52r", 00:21:41.234 "method": "bdev_nvme_attach_controller", 00:21:41.234 "req_id": 1 00:21:41.234 } 00:21:41.234 Got JSON-RPC error response 00:21:41.234 response: 00:21:41.234 { 00:21:41.234 "code": -32602, 00:21:41.234 "message": "Invalid parameters" 00:21:41.234 } 00:21:41.234 08:55:58 -- target/tls.sh@36 -- # killprocess 2108437 00:21:41.234 08:55:58 -- common/autotest_common.sh@936 -- # '[' -z 2108437 ']' 00:21:41.234 08:55:58 -- common/autotest_common.sh@940 -- # kill -0 2108437 00:21:41.234 08:55:58 -- common/autotest_common.sh@941 -- # uname 00:21:41.234 08:55:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:41.235 08:55:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2108437 00:21:41.235 08:55:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:41.235 08:55:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:41.235 08:55:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2108437' 00:21:41.235 killing process with pid 2108437 00:21:41.235 08:55:58 -- common/autotest_common.sh@955 -- # kill 2108437 00:21:41.235 Received shutdown signal, test time was about 10.000000 seconds 00:21:41.235 00:21:41.235 Latency(us) 00:21:41.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.235 =================================================================================================================== 00:21:41.235 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:41.235 [2024-04-26 08:55:58.404116] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:41.235 08:55:58 -- common/autotest_common.sh@960 -- # wait 2108437 00:21:41.494 08:55:58 -- target/tls.sh@37 -- # return 1 00:21:41.494 08:55:58 -- common/autotest_common.sh@641 -- # es=1 00:21:41.494 08:55:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:41.494 08:55:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:41.494 08:55:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:41.494 08:55:58 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.23YZopw52r 00:21:41.494 08:55:58 -- common/autotest_common.sh@638 -- # local es=0 00:21:41.494 08:55:58 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.23YZopw52r 00:21:41.494 08:55:58 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:41.494 08:55:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:41.494 08:55:58 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:41.494 08:55:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:41.494 08:55:58 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.23YZopw52r 00:21:41.494 08:55:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:41.494 08:55:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:41.494 08:55:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:41.494 08:55:58 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.23YZopw52r' 00:21:41.494 08:55:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:41.494 08:55:58 -- target/tls.sh@28 -- # bdevperf_pid=2108708 00:21:41.494 08:55:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:41.494 08:55:58 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:41.494 08:55:58 -- target/tls.sh@31 -- # waitforlisten 2108708 /var/tmp/bdevperf.sock 00:21:41.494 08:55:58 -- common/autotest_common.sh@817 -- # '[' -z 2108708 ']' 00:21:41.494 08:55:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.494 08:55:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:41.494 08:55:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.494 08:55:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:41.494 08:55:58 -- common/autotest_common.sh@10 -- # set +x 00:21:41.494 [2024-04-26 08:55:58.644799] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:21:41.494 [2024-04-26 08:55:58.644853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108708 ] 00:21:41.494 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.494 [2024-04-26 08:55:58.710343] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.753 [2024-04-26 08:55:58.772291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.320 08:55:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:42.320 08:55:59 -- common/autotest_common.sh@850 -- # return 0 00:21:42.320 08:55:59 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.23YZopw52r 00:21:42.592 [2024-04-26 08:55:59.597803] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:42.592 [2024-04-26 08:55:59.597883] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:42.592 [2024-04-26 08:55:59.604175] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:42.592 [2024-04-26 08:55:59.604200] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:42.592 [2024-04-26 08:55:59.604227] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:42.592 [2024-04-26 08:55:59.604455] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x668750 (107): Transport endpoint is not connected 00:21:42.592 [2024-04-26 08:55:59.605276] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x668750 (9): Bad file descriptor 00:21:42.592 [2024-04-26 08:55:59.606276] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:42.592 [2024-04-26 08:55:59.606289] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:42.592 [2024-04-26 08:55:59.606298] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:42.592 request: 00:21:42.592 { 00:21:42.592 "name": "TLSTEST", 00:21:42.592 "trtype": "tcp", 00:21:42.592 "traddr": "10.0.0.2", 00:21:42.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:42.592 "adrfam": "ipv4", 00:21:42.592 "trsvcid": "4420", 00:21:42.592 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:42.592 "psk": "/tmp/tmp.23YZopw52r", 00:21:42.592 "method": "bdev_nvme_attach_controller", 00:21:42.592 "req_id": 1 00:21:42.592 } 00:21:42.592 Got JSON-RPC error response 00:21:42.592 response: 00:21:42.592 { 00:21:42.592 "code": -32602, 00:21:42.592 "message": "Invalid parameters" 00:21:42.592 } 00:21:42.592 08:55:59 -- target/tls.sh@36 -- # killprocess 2108708 00:21:42.592 08:55:59 -- common/autotest_common.sh@936 -- # '[' -z 2108708 ']' 00:21:42.592 08:55:59 -- common/autotest_common.sh@940 -- # kill -0 2108708 00:21:42.592 08:55:59 -- common/autotest_common.sh@941 -- # uname 00:21:42.592 08:55:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:42.592 08:55:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2108708 00:21:42.592 08:55:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:42.592 08:55:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:42.592 08:55:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2108708' 00:21:42.592 killing process with pid 2108708 00:21:42.592 08:55:59 -- common/autotest_common.sh@955 -- # kill 2108708 00:21:42.592 Received shutdown signal, test time was about 10.000000 seconds 00:21:42.592 00:21:42.592 Latency(us) 00:21:42.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.592 =================================================================================================================== 00:21:42.592 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:42.592 [2024-04-26 08:55:59.685894] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:42.592 08:55:59 -- common/autotest_common.sh@960 -- # wait 2108708 00:21:42.851 08:55:59 -- target/tls.sh@37 -- # return 1 00:21:42.851 08:55:59 -- common/autotest_common.sh@641 -- # es=1 00:21:42.851 08:55:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:42.851 08:55:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:42.851 08:55:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:42.851 08:55:59 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:42.851 08:55:59 -- common/autotest_common.sh@638 -- # local es=0 00:21:42.851 08:55:59 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:42.851 08:55:59 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:42.851 08:55:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:42.851 08:55:59 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:42.851 08:55:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:42.851 08:55:59 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:42.851 08:55:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:42.851 08:55:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:42.851 08:55:59 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:42.851 08:55:59 -- target/tls.sh@23 -- # psk= 
00:21:42.851 08:55:59 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:42.851 08:55:59 -- target/tls.sh@28 -- # bdevperf_pid=2108980 00:21:42.851 08:55:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:42.851 08:55:59 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:42.851 08:55:59 -- target/tls.sh@31 -- # waitforlisten 2108980 /var/tmp/bdevperf.sock 00:21:42.851 08:55:59 -- common/autotest_common.sh@817 -- # '[' -z 2108980 ']' 00:21:42.851 08:55:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.851 08:55:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:42.851 08:55:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:42.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:42.851 08:55:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:42.851 08:55:59 -- common/autotest_common.sh@10 -- # set +x 00:21:42.851 [2024-04-26 08:55:59.928429] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:21:42.851 [2024-04-26 08:55:59.928485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108980 ] 00:21:42.851 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.851 [2024-04-26 08:55:59.995324] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.851 [2024-04-26 08:56:00.071602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.787 08:56:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:43.787 08:56:00 -- common/autotest_common.sh@850 -- # return 0 00:21:43.787 08:56:00 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:43.787 [2024-04-26 08:56:00.897503] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:43.788 [2024-04-26 08:56:00.899928] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa41e30 (9): Bad file descriptor 00:21:43.788 [2024-04-26 08:56:00.900927] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:43.788 [2024-04-26 08:56:00.900940] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:43.788 [2024-04-26 08:56:00.900950] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:43.788 request: 00:21:43.788 { 00:21:43.788 "name": "TLSTEST", 00:21:43.788 "trtype": "tcp", 00:21:43.788 "traddr": "10.0.0.2", 00:21:43.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.788 "adrfam": "ipv4", 00:21:43.788 "trsvcid": "4420", 00:21:43.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.788 "method": "bdev_nvme_attach_controller", 00:21:43.788 "req_id": 1 00:21:43.788 } 00:21:43.788 Got JSON-RPC error response 00:21:43.788 response: 00:21:43.788 { 00:21:43.788 "code": -32602, 00:21:43.788 "message": "Invalid parameters" 00:21:43.788 } 00:21:43.788 08:56:00 -- target/tls.sh@36 -- # killprocess 2108980 00:21:43.788 08:56:00 -- common/autotest_common.sh@936 -- # '[' -z 2108980 ']' 00:21:43.788 08:56:00 -- common/autotest_common.sh@940 -- # kill -0 2108980 00:21:43.788 08:56:00 -- common/autotest_common.sh@941 -- # uname 00:21:43.788 08:56:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:43.788 08:56:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2108980 00:21:43.788 08:56:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:43.788 08:56:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:43.788 08:56:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2108980' 00:21:43.788 killing process with pid 2108980 00:21:43.788 08:56:00 -- common/autotest_common.sh@955 -- # kill 2108980 00:21:43.788 Received shutdown signal, test time was about 10.000000 seconds 00:21:43.788 00:21:43.788 Latency(us) 00:21:43.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.788 =================================================================================================================== 00:21:43.788 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:43.788 08:56:00 -- common/autotest_common.sh@960 -- # wait 2108980 00:21:44.063 08:56:01 -- target/tls.sh@37 -- # return 1 00:21:44.063 08:56:01 -- common/autotest_common.sh@641 -- # es=1 00:21:44.063 08:56:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:44.063 08:56:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:44.063 08:56:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:44.063 08:56:01 -- target/tls.sh@158 -- # killprocess 2103843 00:21:44.063 08:56:01 -- common/autotest_common.sh@936 -- # '[' -z 2103843 ']' 00:21:44.063 08:56:01 -- common/autotest_common.sh@940 -- # kill -0 2103843 00:21:44.063 08:56:01 -- common/autotest_common.sh@941 -- # uname 00:21:44.063 08:56:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:44.063 08:56:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2103843 00:21:44.063 08:56:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:44.063 08:56:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:44.063 08:56:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2103843' 00:21:44.063 killing process with pid 2103843 00:21:44.063 08:56:01 -- common/autotest_common.sh@955 -- # kill 2103843 00:21:44.063 [2024-04-26 08:56:01.226058] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:44.063 08:56:01 -- common/autotest_common.sh@960 -- # wait 2103843 00:21:44.343 08:56:01 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:44.343 08:56:01 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:21:44.343 08:56:01 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:44.343 08:56:01 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:44.343 08:56:01 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:44.343 08:56:01 -- nvmf/common.sh@693 -- # digest=2 00:21:44.343 08:56:01 -- nvmf/common.sh@694 -- # python - 00:21:44.343 08:56:01 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:44.343 08:56:01 -- target/tls.sh@160 -- # mktemp 00:21:44.343 08:56:01 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.xOzX8qhDS4 00:21:44.343 08:56:01 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:44.343 08:56:01 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.xOzX8qhDS4 00:21:44.343 08:56:01 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:44.343 08:56:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:44.343 08:56:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:44.343 08:56:01 -- common/autotest_common.sh@10 -- # set +x 00:21:44.343 08:56:01 -- nvmf/common.sh@470 -- # nvmfpid=2109268 00:21:44.343 08:56:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:44.343 08:56:01 -- nvmf/common.sh@471 -- # waitforlisten 2109268 00:21:44.343 08:56:01 -- common/autotest_common.sh@817 -- # '[' -z 2109268 ']' 00:21:44.343 08:56:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.343 08:56:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:44.343 08:56:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.343 08:56:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:44.343 08:56:01 -- common/autotest_common.sh@10 -- # set +x 00:21:44.343 [2024-04-26 08:56:01.550332] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:21:44.343 [2024-04-26 08:56:01.550382] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.343 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.603 [2024-04-26 08:56:01.621964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.604 [2024-04-26 08:56:01.683127] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.604 [2024-04-26 08:56:01.683169] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.604 [2024-04-26 08:56:01.683179] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.604 [2024-04-26 08:56:01.683187] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.604 [2024-04-26 08:56:01.683194] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:44.604 [2024-04-26 08:56:01.683219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.171 08:56:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:45.171 08:56:02 -- common/autotest_common.sh@850 -- # return 0 00:21:45.171 08:56:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:45.171 08:56:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:45.171 08:56:02 -- common/autotest_common.sh@10 -- # set +x 00:21:45.171 08:56:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.171 08:56:02 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.xOzX8qhDS4 00:21:45.171 08:56:02 -- target/tls.sh@49 -- # local key=/tmp/tmp.xOzX8qhDS4 00:21:45.171 08:56:02 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:45.430 [2024-04-26 08:56:02.529698] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.430 08:56:02 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:45.688 08:56:02 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:45.688 [2024-04-26 08:56:02.862540] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:45.688 [2024-04-26 08:56:02.862741] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.688 08:56:02 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:45.947 malloc0 00:21:45.947 08:56:03 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:45.947 08:56:03 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOzX8qhDS4 00:21:46.207 [2024-04-26 08:56:03.339908] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:46.207 08:56:03 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xOzX8qhDS4 00:21:46.207 08:56:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:46.207 08:56:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:46.207 08:56:03 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:46.207 08:56:03 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xOzX8qhDS4' 00:21:46.207 08:56:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.207 08:56:03 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:46.207 08:56:03 -- target/tls.sh@28 -- # bdevperf_pid=2109560 00:21:46.207 08:56:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:46.207 08:56:03 -- target/tls.sh@31 -- # waitforlisten 2109560 /var/tmp/bdevperf.sock 00:21:46.207 08:56:03 -- common/autotest_common.sh@817 -- # '[' -z 2109560 ']' 00:21:46.207 08:56:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.207 08:56:03 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:21:46.207 08:56:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.207 08:56:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:46.207 08:56:03 -- common/autotest_common.sh@10 -- # set +x 00:21:46.207 [2024-04-26 08:56:03.405771] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:21:46.207 [2024-04-26 08:56:03.405821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2109560 ] 00:21:46.207 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.467 [2024-04-26 08:56:03.471337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.467 [2024-04-26 08:56:03.542568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.036 08:56:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:47.036 08:56:04 -- common/autotest_common.sh@850 -- # return 0 00:21:47.036 08:56:04 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOzX8qhDS4 00:21:47.296 [2024-04-26 08:56:04.353113] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:47.296 [2024-04-26 08:56:04.353187] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:47.296 TLSTESTn1 00:21:47.296 08:56:04 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:47.555 Running I/O for 10 seconds... 
00:21:57.539 00:21:57.539 Latency(us) 00:21:57.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.539 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:57.539 Verification LBA range: start 0x0 length 0x2000 00:21:57.539 TLSTESTn1 : 10.08 1420.67 5.55 0.00 0.00 89827.70 6186.60 130862.28 00:21:57.539 =================================================================================================================== 00:21:57.539 Total : 1420.67 5.55 0.00 0.00 89827.70 6186.60 130862.28 00:21:57.539 0 00:21:57.539 08:56:14 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:57.539 08:56:14 -- target/tls.sh@45 -- # killprocess 2109560 00:21:57.539 08:56:14 -- common/autotest_common.sh@936 -- # '[' -z 2109560 ']' 00:21:57.539 08:56:14 -- common/autotest_common.sh@940 -- # kill -0 2109560 00:21:57.539 08:56:14 -- common/autotest_common.sh@941 -- # uname 00:21:57.539 08:56:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:57.539 08:56:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2109560 00:21:57.539 08:56:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:57.539 08:56:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:57.539 08:56:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2109560' 00:21:57.539 killing process with pid 2109560 00:21:57.539 08:56:14 -- common/autotest_common.sh@955 -- # kill 2109560 00:21:57.539 Received shutdown signal, test time was about 10.000000 seconds 00:21:57.539 00:21:57.539 Latency(us) 00:21:57.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.539 =================================================================================================================== 00:21:57.539 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:57.539 [2024-04-26 08:56:14.713029] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:57.539 08:56:14 -- common/autotest_common.sh@960 -- # wait 2109560 00:21:57.799 08:56:14 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.xOzX8qhDS4 00:21:57.799 08:56:14 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xOzX8qhDS4 00:21:57.799 08:56:14 -- common/autotest_common.sh@638 -- # local es=0 00:21:57.799 08:56:14 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xOzX8qhDS4 00:21:57.799 08:56:14 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:57.799 08:56:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:57.799 08:56:14 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:57.799 08:56:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:57.799 08:56:14 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xOzX8qhDS4 00:21:57.799 08:56:14 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:57.799 08:56:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:57.799 08:56:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:57.799 08:56:14 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xOzX8qhDS4' 00:21:57.799 08:56:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:57.799 08:56:14 -- target/tls.sh@28 -- # 
bdevperf_pid=2111501 00:21:57.799 08:56:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:57.799 08:56:14 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:57.799 08:56:14 -- target/tls.sh@31 -- # waitforlisten 2111501 /var/tmp/bdevperf.sock 00:21:57.799 08:56:14 -- common/autotest_common.sh@817 -- # '[' -z 2111501 ']' 00:21:57.799 08:56:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.799 08:56:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:57.799 08:56:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.799 08:56:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:57.799 08:56:14 -- common/autotest_common.sh@10 -- # set +x 00:21:57.799 [2024-04-26 08:56:14.968889] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:21:57.799 [2024-04-26 08:56:14.968944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2111501 ] 00:21:57.799 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.799 [2024-04-26 08:56:15.035679] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.057 [2024-04-26 08:56:15.106185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.625 08:56:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:58.625 08:56:15 -- common/autotest_common.sh@850 -- # return 0 00:21:58.625 08:56:15 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOzX8qhDS4 00:21:58.883 [2024-04-26 08:56:15.908930] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.883 [2024-04-26 08:56:15.908985] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:58.883 [2024-04-26 08:56:15.908994] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.xOzX8qhDS4 00:21:58.883 request: 00:21:58.883 { 00:21:58.883 "name": "TLSTEST", 00:21:58.883 "trtype": "tcp", 00:21:58.883 "traddr": "10.0.0.2", 00:21:58.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:58.883 "adrfam": "ipv4", 00:21:58.883 "trsvcid": "4420", 00:21:58.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.883 "psk": "/tmp/tmp.xOzX8qhDS4", 00:21:58.883 "method": "bdev_nvme_attach_controller", 00:21:58.883 "req_id": 1 00:21:58.883 } 00:21:58.883 Got JSON-RPC error response 00:21:58.883 response: 00:21:58.883 { 00:21:58.883 "code": -1, 00:21:58.883 "message": "Operation not permitted" 00:21:58.883 } 00:21:58.883 08:56:15 -- target/tls.sh@36 -- # killprocess 2111501 00:21:58.883 08:56:15 -- common/autotest_common.sh@936 -- # '[' -z 2111501 ']' 00:21:58.883 08:56:15 -- common/autotest_common.sh@940 -- # kill -0 2111501 00:21:58.883 08:56:15 -- common/autotest_common.sh@941 -- # uname 00:21:58.884 08:56:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:58.884 
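That "Operation not permitted" response is the point of this case: target/tls.sh@170 ran chmod 0666 on the key, and bdev_nvme_load_psk then refuses to read it. A pre-flight check along the same lines can be sketched as below; the exact mode bits SPDK rejects are an assumption here, the log only demonstrates that 0666 fails while 0600 (restored later) passes:

  psk=/tmp/tmp.xOzX8qhDS4
  mode=$(stat -c '%a' "$psk")
  # Treat any group/other access as too permissive, matching the 0666-fails behaviour above.
  if (( 0$mode & 077 )); then
    echo "PSK $psk has mode $mode; chmod 0600 it before attaching" >&2
  fi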
08:56:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2111501 00:21:58.884 08:56:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:58.884 08:56:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:58.884 08:56:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2111501' 00:21:58.884 killing process with pid 2111501 00:21:58.884 08:56:15 -- common/autotest_common.sh@955 -- # kill 2111501 00:21:58.884 Received shutdown signal, test time was about 10.000000 seconds 00:21:58.884 00:21:58.884 Latency(us) 00:21:58.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.884 =================================================================================================================== 00:21:58.884 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:58.884 08:56:15 -- common/autotest_common.sh@960 -- # wait 2111501 00:21:59.142 08:56:16 -- target/tls.sh@37 -- # return 1 00:21:59.142 08:56:16 -- common/autotest_common.sh@641 -- # es=1 00:21:59.142 08:56:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:59.142 08:56:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:59.142 08:56:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:59.142 08:56:16 -- target/tls.sh@174 -- # killprocess 2109268 00:21:59.142 08:56:16 -- common/autotest_common.sh@936 -- # '[' -z 2109268 ']' 00:21:59.142 08:56:16 -- common/autotest_common.sh@940 -- # kill -0 2109268 00:21:59.142 08:56:16 -- common/autotest_common.sh@941 -- # uname 00:21:59.142 08:56:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:59.143 08:56:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2109268 00:21:59.143 08:56:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:59.143 08:56:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:59.143 08:56:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2109268' 00:21:59.143 killing process with pid 2109268 00:21:59.143 08:56:16 -- common/autotest_common.sh@955 -- # kill 2109268 00:21:59.143 [2024-04-26 08:56:16.233334] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:59.143 08:56:16 -- common/autotest_common.sh@960 -- # wait 2109268 00:21:59.401 08:56:16 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:59.401 08:56:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:59.401 08:56:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:59.401 08:56:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.401 08:56:16 -- nvmf/common.sh@470 -- # nvmfpid=2111764 00:21:59.401 08:56:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:59.401 08:56:16 -- nvmf/common.sh@471 -- # waitforlisten 2111764 00:21:59.401 08:56:16 -- common/autotest_common.sh@817 -- # '[' -z 2111764 ']' 00:21:59.401 08:56:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.401 08:56:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:59.401 08:56:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
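The same permission rule is exercised on the target side next, so a fresh nvmf_tgt is brought up first. In sketch form, the launch plus a readiness poll (netns name, core mask, and paths are taken from the log; the polling loop is an illustrative stand-in for the harness's waitforlisten helper):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Poll the default RPC socket until the app is up and answering.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done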
00:21:59.401 08:56:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:59.401 08:56:16 -- common/autotest_common.sh@10 -- # set +x 00:21:59.401 [2024-04-26 08:56:16.504532] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:21:59.401 [2024-04-26 08:56:16.504584] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.401 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.401 [2024-04-26 08:56:16.576596] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.401 [2024-04-26 08:56:16.647350] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.401 [2024-04-26 08:56:16.647386] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.402 [2024-04-26 08:56:16.647396] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.402 [2024-04-26 08:56:16.647405] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.402 [2024-04-26 08:56:16.647412] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.402 [2024-04-26 08:56:16.647436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.337 08:56:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:00.337 08:56:17 -- common/autotest_common.sh@850 -- # return 0 00:22:00.337 08:56:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:00.337 08:56:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:00.337 08:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:00.337 08:56:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.337 08:56:17 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.xOzX8qhDS4 00:22:00.337 08:56:17 -- common/autotest_common.sh@638 -- # local es=0 00:22:00.337 08:56:17 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.xOzX8qhDS4 00:22:00.337 08:56:17 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:22:00.337 08:56:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:00.337 08:56:17 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:22:00.337 08:56:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:00.337 08:56:17 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.xOzX8qhDS4 00:22:00.337 08:56:17 -- target/tls.sh@49 -- # local key=/tmp/tmp.xOzX8qhDS4 00:22:00.337 08:56:17 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:00.337 [2024-04-26 08:56:17.502051] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.337 08:56:17 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:00.599 08:56:17 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:00.599 [2024-04-26 08:56:17.826871] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:00.599 [2024-04-26 08:56:17.827067] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.599 08:56:17 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:00.859 malloc0 00:22:00.859 08:56:18 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:01.118 08:56:18 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOzX8qhDS4 00:22:01.118 [2024-04-26 08:56:18.344609] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:01.118 [2024-04-26 08:56:18.344639] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:01.118 [2024-04-26 08:56:18.344659] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:22:01.118 request: 00:22:01.118 { 00:22:01.118 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.118 "host": "nqn.2016-06.io.spdk:host1", 00:22:01.118 "psk": "/tmp/tmp.xOzX8qhDS4", 00:22:01.118 "method": "nvmf_subsystem_add_host", 00:22:01.118 "req_id": 1 00:22:01.118 } 00:22:01.118 Got JSON-RPC error response 00:22:01.118 response: 00:22:01.118 { 00:22:01.118 "code": -32603, 00:22:01.118 "message": "Internal error" 00:22:01.118 } 00:22:01.118 08:56:18 -- common/autotest_common.sh@641 -- # es=1 00:22:01.118 08:56:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:01.118 08:56:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:01.118 08:56:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:01.118 08:56:18 -- target/tls.sh@180 -- # killprocess 2111764 00:22:01.118 08:56:18 -- common/autotest_common.sh@936 -- # '[' -z 2111764 ']' 00:22:01.118 08:56:18 -- common/autotest_common.sh@940 -- # kill -0 2111764 00:22:01.118 08:56:18 -- common/autotest_common.sh@941 -- # uname 00:22:01.377 08:56:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:01.377 08:56:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2111764 00:22:01.377 08:56:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:01.377 08:56:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:01.377 08:56:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2111764' 00:22:01.377 killing process with pid 2111764 00:22:01.377 08:56:18 -- common/autotest_common.sh@955 -- # kill 2111764 00:22:01.377 08:56:18 -- common/autotest_common.sh@960 -- # wait 2111764 00:22:01.636 08:56:18 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.xOzX8qhDS4 00:22:01.636 08:56:18 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:01.636 08:56:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:01.636 08:56:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:01.636 08:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:01.636 08:56:18 -- nvmf/common.sh@470 -- # nvmfpid=2112267 00:22:01.636 08:56:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:01.636 08:56:18 -- nvmf/common.sh@471 -- # waitforlisten 2112267 00:22:01.636 08:56:18 -- common/autotest_common.sh@817 -- # '[' -z 2112267 ']' 00:22:01.636 08:56:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.636 08:56:18 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:22:01.636 08:56:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.636 08:56:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:01.636 08:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:01.636 [2024-04-26 08:56:18.691046] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:22:01.636 [2024-04-26 08:56:18.691096] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.636 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.636 [2024-04-26 08:56:18.764483] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.636 [2024-04-26 08:56:18.835120] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.636 [2024-04-26 08:56:18.835156] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.636 [2024-04-26 08:56:18.835165] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.636 [2024-04-26 08:56:18.835174] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.636 [2024-04-26 08:56:18.835181] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:01.636 [2024-04-26 08:56:18.835206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.574 08:56:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:02.574 08:56:19 -- common/autotest_common.sh@850 -- # return 0 00:22:02.574 08:56:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:02.574 08:56:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:02.574 08:56:19 -- common/autotest_common.sh@10 -- # set +x 00:22:02.574 08:56:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.574 08:56:19 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.xOzX8qhDS4 00:22:02.574 08:56:19 -- target/tls.sh@49 -- # local key=/tmp/tmp.xOzX8qhDS4 00:22:02.574 08:56:19 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:02.574 [2024-04-26 08:56:19.681939] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.574 08:56:19 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:02.833 08:56:19 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:02.833 [2024-04-26 08:56:20.010790] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:02.833 [2024-04-26 08:56:20.011022] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.833 08:56:20 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:03.093 malloc0 00:22:03.093 08:56:20 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:03.352 08:56:20 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOzX8qhDS4 00:22:03.352 [2024-04-26 08:56:20.556742] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:03.352 08:56:20 -- target/tls.sh@188 -- # bdevperf_pid=2112561 00:22:03.352 08:56:20 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:03.352 08:56:20 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.352 08:56:20 -- target/tls.sh@191 -- # waitforlisten 2112561 /var/tmp/bdevperf.sock 00:22:03.352 08:56:20 -- common/autotest_common.sh@817 -- # '[' -z 2112561 ']' 00:22:03.352 08:56:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.352 08:56:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:03.352 08:56:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.352 08:56:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:03.352 08:56:20 -- common/autotest_common.sh@10 -- # set +x 00:22:03.612 [2024-04-26 08:56:20.619229] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:22:03.612 [2024-04-26 08:56:20.619278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112561 ] 00:22:03.612 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.612 [2024-04-26 08:56:20.685303] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.612 [2024-04-26 08:56:20.755761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.181 08:56:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:04.181 08:56:21 -- common/autotest_common.sh@850 -- # return 0 00:22:04.181 08:56:21 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOzX8qhDS4 00:22:04.440 [2024-04-26 08:56:21.538898] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:04.440 [2024-04-26 08:56:21.538968] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:04.440 TLSTESTn1 00:22:04.440 08:56:21 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:04.699 08:56:21 -- target/tls.sh@196 -- # tgtconf='{ 00:22:04.699 "subsystems": [ 00:22:04.699 { 00:22:04.699 "subsystem": "keyring", 00:22:04.699 "config": [] 00:22:04.699 }, 00:22:04.699 { 00:22:04.699 "subsystem": "iobuf", 00:22:04.699 "config": [ 00:22:04.699 { 00:22:04.699 "method": "iobuf_set_options", 00:22:04.699 "params": { 00:22:04.699 
"small_pool_count": 8192, 00:22:04.699 "large_pool_count": 1024, 00:22:04.699 "small_bufsize": 8192, 00:22:04.699 "large_bufsize": 135168 00:22:04.699 } 00:22:04.699 } 00:22:04.699 ] 00:22:04.699 }, 00:22:04.699 { 00:22:04.699 "subsystem": "sock", 00:22:04.699 "config": [ 00:22:04.699 { 00:22:04.699 "method": "sock_impl_set_options", 00:22:04.699 "params": { 00:22:04.699 "impl_name": "posix", 00:22:04.699 "recv_buf_size": 2097152, 00:22:04.699 "send_buf_size": 2097152, 00:22:04.699 "enable_recv_pipe": true, 00:22:04.699 "enable_quickack": false, 00:22:04.699 "enable_placement_id": 0, 00:22:04.699 "enable_zerocopy_send_server": true, 00:22:04.699 "enable_zerocopy_send_client": false, 00:22:04.699 "zerocopy_threshold": 0, 00:22:04.699 "tls_version": 0, 00:22:04.699 "enable_ktls": false 00:22:04.699 } 00:22:04.699 }, 00:22:04.699 { 00:22:04.699 "method": "sock_impl_set_options", 00:22:04.699 "params": { 00:22:04.699 "impl_name": "ssl", 00:22:04.699 "recv_buf_size": 4096, 00:22:04.699 "send_buf_size": 4096, 00:22:04.699 "enable_recv_pipe": true, 00:22:04.699 "enable_quickack": false, 00:22:04.699 "enable_placement_id": 0, 00:22:04.699 "enable_zerocopy_send_server": true, 00:22:04.699 "enable_zerocopy_send_client": false, 00:22:04.699 "zerocopy_threshold": 0, 00:22:04.699 "tls_version": 0, 00:22:04.699 "enable_ktls": false 00:22:04.699 } 00:22:04.699 } 00:22:04.699 ] 00:22:04.699 }, 00:22:04.699 { 00:22:04.699 "subsystem": "vmd", 00:22:04.699 "config": [] 00:22:04.699 }, 00:22:04.699 { 00:22:04.699 "subsystem": "accel", 00:22:04.699 "config": [ 00:22:04.699 { 00:22:04.699 "method": "accel_set_options", 00:22:04.699 "params": { 00:22:04.699 "small_cache_size": 128, 00:22:04.699 "large_cache_size": 16, 00:22:04.699 "task_count": 2048, 00:22:04.699 "sequence_count": 2048, 00:22:04.699 "buf_count": 2048 00:22:04.699 } 00:22:04.699 } 00:22:04.699 ] 00:22:04.699 }, 00:22:04.699 { 00:22:04.699 "subsystem": "bdev", 00:22:04.699 "config": [ 00:22:04.699 { 00:22:04.699 "method": "bdev_set_options", 00:22:04.699 "params": { 00:22:04.699 "bdev_io_pool_size": 65535, 00:22:04.699 "bdev_io_cache_size": 256, 00:22:04.699 "bdev_auto_examine": true, 00:22:04.699 "iobuf_small_cache_size": 128, 00:22:04.699 "iobuf_large_cache_size": 16 00:22:04.699 } 00:22:04.699 }, 00:22:04.699 { 00:22:04.699 "method": "bdev_raid_set_options", 00:22:04.699 "params": { 00:22:04.699 "process_window_size_kb": 1024 00:22:04.699 } 00:22:04.699 }, 00:22:04.699 { 00:22:04.699 "method": "bdev_iscsi_set_options", 00:22:04.699 "params": { 00:22:04.699 "timeout_sec": 30 00:22:04.699 } 00:22:04.699 }, 00:22:04.699 { 00:22:04.699 "method": "bdev_nvme_set_options", 00:22:04.699 "params": { 00:22:04.699 "action_on_timeout": "none", 00:22:04.699 "timeout_us": 0, 00:22:04.699 "timeout_admin_us": 0, 00:22:04.699 "keep_alive_timeout_ms": 10000, 00:22:04.699 "arbitration_burst": 0, 00:22:04.699 "low_priority_weight": 0, 00:22:04.699 "medium_priority_weight": 0, 00:22:04.699 "high_priority_weight": 0, 00:22:04.699 "nvme_adminq_poll_period_us": 10000, 00:22:04.699 "nvme_ioq_poll_period_us": 0, 00:22:04.699 "io_queue_requests": 0, 00:22:04.699 "delay_cmd_submit": true, 00:22:04.699 "transport_retry_count": 4, 00:22:04.699 "bdev_retry_count": 3, 00:22:04.699 "transport_ack_timeout": 0, 00:22:04.699 "ctrlr_loss_timeout_sec": 0, 00:22:04.699 "reconnect_delay_sec": 0, 00:22:04.699 "fast_io_fail_timeout_sec": 0, 00:22:04.699 "disable_auto_failback": false, 00:22:04.699 "generate_uuids": false, 00:22:04.699 "transport_tos": 0, 00:22:04.699 "nvme_error_stat": 
false, 00:22:04.699 "rdma_srq_size": 0, 00:22:04.699 "io_path_stat": false, 00:22:04.699 "allow_accel_sequence": false, 00:22:04.699 "rdma_max_cq_size": 0, 00:22:04.700 "rdma_cm_event_timeout_ms": 0, 00:22:04.700 "dhchap_digests": [ 00:22:04.700 "sha256", 00:22:04.700 "sha384", 00:22:04.700 "sha512" 00:22:04.700 ], 00:22:04.700 "dhchap_dhgroups": [ 00:22:04.700 "null", 00:22:04.700 "ffdhe2048", 00:22:04.700 "ffdhe3072", 00:22:04.700 "ffdhe4096", 00:22:04.700 "ffdhe6144", 00:22:04.700 "ffdhe8192" 00:22:04.700 ] 00:22:04.700 } 00:22:04.700 }, 00:22:04.700 { 00:22:04.700 "method": "bdev_nvme_set_hotplug", 00:22:04.700 "params": { 00:22:04.700 "period_us": 100000, 00:22:04.700 "enable": false 00:22:04.700 } 00:22:04.700 }, 00:22:04.700 { 00:22:04.700 "method": "bdev_malloc_create", 00:22:04.700 "params": { 00:22:04.700 "name": "malloc0", 00:22:04.700 "num_blocks": 8192, 00:22:04.700 "block_size": 4096, 00:22:04.700 "physical_block_size": 4096, 00:22:04.700 "uuid": "74fc36d4-8a7d-433d-87a7-cadcf3dfd0ca", 00:22:04.700 "optimal_io_boundary": 0 00:22:04.700 } 00:22:04.700 }, 00:22:04.700 { 00:22:04.700 "method": "bdev_wait_for_examine" 00:22:04.700 } 00:22:04.700 ] 00:22:04.700 }, 00:22:04.700 { 00:22:04.700 "subsystem": "nbd", 00:22:04.700 "config": [] 00:22:04.700 }, 00:22:04.700 { 00:22:04.700 "subsystem": "scheduler", 00:22:04.700 "config": [ 00:22:04.700 { 00:22:04.700 "method": "framework_set_scheduler", 00:22:04.700 "params": { 00:22:04.700 "name": "static" 00:22:04.700 } 00:22:04.700 } 00:22:04.700 ] 00:22:04.700 }, 00:22:04.700 { 00:22:04.700 "subsystem": "nvmf", 00:22:04.700 "config": [ 00:22:04.700 { 00:22:04.700 "method": "nvmf_set_config", 00:22:04.700 "params": { 00:22:04.700 "discovery_filter": "match_any", 00:22:04.700 "admin_cmd_passthru": { 00:22:04.700 "identify_ctrlr": false 00:22:04.700 } 00:22:04.700 } 00:22:04.700 }, 00:22:04.700 { 00:22:04.700 "method": "nvmf_set_max_subsystems", 00:22:04.700 "params": { 00:22:04.700 "max_subsystems": 1024 00:22:04.700 } 00:22:04.700 }, 00:22:04.700 { 00:22:04.700 "method": "nvmf_set_crdt", 00:22:04.700 "params": { 00:22:04.700 "crdt1": 0, 00:22:04.700 "crdt2": 0, 00:22:04.700 "crdt3": 0 00:22:04.700 } 00:22:04.700 }, 00:22:04.700 { 00:22:04.700 "method": "nvmf_create_transport", 00:22:04.700 "params": { 00:22:04.700 "trtype": "TCP", 00:22:04.700 "max_queue_depth": 128, 00:22:04.700 "max_io_qpairs_per_ctrlr": 127, 00:22:04.700 "in_capsule_data_size": 4096, 00:22:04.700 "max_io_size": 131072, 00:22:04.700 "io_unit_size": 131072, 00:22:04.700 "max_aq_depth": 128, 00:22:04.700 "num_shared_buffers": 511, 00:22:04.700 "buf_cache_size": 4294967295, 00:22:04.700 "dif_insert_or_strip": false, 00:22:04.700 "zcopy": false, 00:22:04.700 "c2h_success": false, 00:22:04.700 "sock_priority": 0, 00:22:04.700 "abort_timeout_sec": 1, 00:22:04.700 "ack_timeout": 0, 00:22:04.700 "data_wr_pool_size": 0 00:22:04.700 } 00:22:04.700 }, 00:22:04.700 { 00:22:04.700 "method": "nvmf_create_subsystem", 00:22:04.700 "params": { 00:22:04.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.700 "allow_any_host": false, 00:22:04.700 "serial_number": "SPDK00000000000001", 00:22:04.700 "model_number": "SPDK bdev Controller", 00:22:04.700 "max_namespaces": 10, 00:22:04.700 "min_cntlid": 1, 00:22:04.700 "max_cntlid": 65519, 00:22:04.700 "ana_reporting": false 00:22:04.700 } 00:22:04.700 }, 00:22:04.700 { 00:22:04.700 "method": "nvmf_subsystem_add_host", 00:22:04.700 "params": { 00:22:04.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.700 "host": "nqn.2016-06.io.spdk:host1", 
00:22:04.700 "psk": "/tmp/tmp.xOzX8qhDS4" 00:22:04.700 } 00:22:04.700 }, 00:22:04.700 { 00:22:04.700 "method": "nvmf_subsystem_add_ns", 00:22:04.700 "params": { 00:22:04.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.700 "namespace": { 00:22:04.700 "nsid": 1, 00:22:04.700 "bdev_name": "malloc0", 00:22:04.700 "nguid": "74FC36D48A7D433D87A7CADCF3DFD0CA", 00:22:04.700 "uuid": "74fc36d4-8a7d-433d-87a7-cadcf3dfd0ca", 00:22:04.700 "no_auto_visible": false 00:22:04.700 } 00:22:04.700 } 00:22:04.700 }, 00:22:04.700 { 00:22:04.700 "method": "nvmf_subsystem_add_listener", 00:22:04.700 "params": { 00:22:04.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.700 "listen_address": { 00:22:04.700 "trtype": "TCP", 00:22:04.700 "adrfam": "IPv4", 00:22:04.700 "traddr": "10.0.0.2", 00:22:04.700 "trsvcid": "4420" 00:22:04.700 }, 00:22:04.700 "secure_channel": true 00:22:04.700 } 00:22:04.700 } 00:22:04.700 ] 00:22:04.700 } 00:22:04.700 ] 00:22:04.700 }' 00:22:04.700 08:56:21 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:04.960 08:56:22 -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:04.960 "subsystems": [ 00:22:04.960 { 00:22:04.960 "subsystem": "keyring", 00:22:04.960 "config": [] 00:22:04.960 }, 00:22:04.960 { 00:22:04.960 "subsystem": "iobuf", 00:22:04.960 "config": [ 00:22:04.960 { 00:22:04.960 "method": "iobuf_set_options", 00:22:04.960 "params": { 00:22:04.960 "small_pool_count": 8192, 00:22:04.960 "large_pool_count": 1024, 00:22:04.960 "small_bufsize": 8192, 00:22:04.960 "large_bufsize": 135168 00:22:04.960 } 00:22:04.960 } 00:22:04.960 ] 00:22:04.960 }, 00:22:04.960 { 00:22:04.960 "subsystem": "sock", 00:22:04.960 "config": [ 00:22:04.960 { 00:22:04.960 "method": "sock_impl_set_options", 00:22:04.960 "params": { 00:22:04.960 "impl_name": "posix", 00:22:04.960 "recv_buf_size": 2097152, 00:22:04.960 "send_buf_size": 2097152, 00:22:04.960 "enable_recv_pipe": true, 00:22:04.960 "enable_quickack": false, 00:22:04.960 "enable_placement_id": 0, 00:22:04.960 "enable_zerocopy_send_server": true, 00:22:04.960 "enable_zerocopy_send_client": false, 00:22:04.960 "zerocopy_threshold": 0, 00:22:04.960 "tls_version": 0, 00:22:04.960 "enable_ktls": false 00:22:04.960 } 00:22:04.960 }, 00:22:04.960 { 00:22:04.960 "method": "sock_impl_set_options", 00:22:04.960 "params": { 00:22:04.960 "impl_name": "ssl", 00:22:04.960 "recv_buf_size": 4096, 00:22:04.960 "send_buf_size": 4096, 00:22:04.960 "enable_recv_pipe": true, 00:22:04.960 "enable_quickack": false, 00:22:04.960 "enable_placement_id": 0, 00:22:04.960 "enable_zerocopy_send_server": true, 00:22:04.960 "enable_zerocopy_send_client": false, 00:22:04.960 "zerocopy_threshold": 0, 00:22:04.960 "tls_version": 0, 00:22:04.960 "enable_ktls": false 00:22:04.960 } 00:22:04.960 } 00:22:04.960 ] 00:22:04.960 }, 00:22:04.960 { 00:22:04.960 "subsystem": "vmd", 00:22:04.960 "config": [] 00:22:04.960 }, 00:22:04.960 { 00:22:04.960 "subsystem": "accel", 00:22:04.960 "config": [ 00:22:04.960 { 00:22:04.960 "method": "accel_set_options", 00:22:04.960 "params": { 00:22:04.960 "small_cache_size": 128, 00:22:04.960 "large_cache_size": 16, 00:22:04.960 "task_count": 2048, 00:22:04.960 "sequence_count": 2048, 00:22:04.960 "buf_count": 2048 00:22:04.960 } 00:22:04.960 } 00:22:04.960 ] 00:22:04.960 }, 00:22:04.960 { 00:22:04.960 "subsystem": "bdev", 00:22:04.960 "config": [ 00:22:04.960 { 00:22:04.960 "method": "bdev_set_options", 00:22:04.960 "params": { 00:22:04.960 "bdev_io_pool_size": 65535, 
00:22:04.960 "bdev_io_cache_size": 256, 00:22:04.960 "bdev_auto_examine": true, 00:22:04.960 "iobuf_small_cache_size": 128, 00:22:04.960 "iobuf_large_cache_size": 16 00:22:04.960 } 00:22:04.960 }, 00:22:04.960 { 00:22:04.960 "method": "bdev_raid_set_options", 00:22:04.960 "params": { 00:22:04.960 "process_window_size_kb": 1024 00:22:04.960 } 00:22:04.960 }, 00:22:04.960 { 00:22:04.960 "method": "bdev_iscsi_set_options", 00:22:04.960 "params": { 00:22:04.960 "timeout_sec": 30 00:22:04.960 } 00:22:04.960 }, 00:22:04.960 { 00:22:04.960 "method": "bdev_nvme_set_options", 00:22:04.960 "params": { 00:22:04.960 "action_on_timeout": "none", 00:22:04.960 "timeout_us": 0, 00:22:04.960 "timeout_admin_us": 0, 00:22:04.960 "keep_alive_timeout_ms": 10000, 00:22:04.960 "arbitration_burst": 0, 00:22:04.960 "low_priority_weight": 0, 00:22:04.960 "medium_priority_weight": 0, 00:22:04.960 "high_priority_weight": 0, 00:22:04.960 "nvme_adminq_poll_period_us": 10000, 00:22:04.960 "nvme_ioq_poll_period_us": 0, 00:22:04.960 "io_queue_requests": 512, 00:22:04.960 "delay_cmd_submit": true, 00:22:04.960 "transport_retry_count": 4, 00:22:04.960 "bdev_retry_count": 3, 00:22:04.960 "transport_ack_timeout": 0, 00:22:04.960 "ctrlr_loss_timeout_sec": 0, 00:22:04.960 "reconnect_delay_sec": 0, 00:22:04.960 "fast_io_fail_timeout_sec": 0, 00:22:04.960 "disable_auto_failback": false, 00:22:04.960 "generate_uuids": false, 00:22:04.960 "transport_tos": 0, 00:22:04.960 "nvme_error_stat": false, 00:22:04.960 "rdma_srq_size": 0, 00:22:04.960 "io_path_stat": false, 00:22:04.960 "allow_accel_sequence": false, 00:22:04.960 "rdma_max_cq_size": 0, 00:22:04.960 "rdma_cm_event_timeout_ms": 0, 00:22:04.960 "dhchap_digests": [ 00:22:04.960 "sha256", 00:22:04.960 "sha384", 00:22:04.960 "sha512" 00:22:04.960 ], 00:22:04.960 "dhchap_dhgroups": [ 00:22:04.960 "null", 00:22:04.960 "ffdhe2048", 00:22:04.960 "ffdhe3072", 00:22:04.960 "ffdhe4096", 00:22:04.961 "ffdhe6144", 00:22:04.961 "ffdhe8192" 00:22:04.961 ] 00:22:04.961 } 00:22:04.961 }, 00:22:04.961 { 00:22:04.961 "method": "bdev_nvme_attach_controller", 00:22:04.961 "params": { 00:22:04.961 "name": "TLSTEST", 00:22:04.961 "trtype": "TCP", 00:22:04.961 "adrfam": "IPv4", 00:22:04.961 "traddr": "10.0.0.2", 00:22:04.961 "trsvcid": "4420", 00:22:04.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.961 "prchk_reftag": false, 00:22:04.961 "prchk_guard": false, 00:22:04.961 "ctrlr_loss_timeout_sec": 0, 00:22:04.961 "reconnect_delay_sec": 0, 00:22:04.961 "fast_io_fail_timeout_sec": 0, 00:22:04.961 "psk": "/tmp/tmp.xOzX8qhDS4", 00:22:04.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.961 "hdgst": false, 00:22:04.961 "ddgst": false 00:22:04.961 } 00:22:04.961 }, 00:22:04.961 { 00:22:04.961 "method": "bdev_nvme_set_hotplug", 00:22:04.961 "params": { 00:22:04.961 "period_us": 100000, 00:22:04.961 "enable": false 00:22:04.961 } 00:22:04.961 }, 00:22:04.961 { 00:22:04.961 "method": "bdev_wait_for_examine" 00:22:04.961 } 00:22:04.961 ] 00:22:04.961 }, 00:22:04.961 { 00:22:04.961 "subsystem": "nbd", 00:22:04.961 "config": [] 00:22:04.961 } 00:22:04.961 ] 00:22:04.961 }' 00:22:04.961 08:56:22 -- target/tls.sh@199 -- # killprocess 2112561 00:22:04.961 08:56:22 -- common/autotest_common.sh@936 -- # '[' -z 2112561 ']' 00:22:04.961 08:56:22 -- common/autotest_common.sh@940 -- # kill -0 2112561 00:22:04.961 08:56:22 -- common/autotest_common.sh@941 -- # uname 00:22:04.961 08:56:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:04.961 08:56:22 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 2112561 00:22:04.961 08:56:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:04.961 08:56:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:04.961 08:56:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2112561' 00:22:04.961 killing process with pid 2112561 00:22:04.961 08:56:22 -- common/autotest_common.sh@955 -- # kill 2112561 00:22:04.961 Received shutdown signal, test time was about 10.000000 seconds 00:22:04.961 00:22:04.961 Latency(us) 00:22:04.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.961 =================================================================================================================== 00:22:04.961 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:04.961 [2024-04-26 08:56:22.197014] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:04.961 08:56:22 -- common/autotest_common.sh@960 -- # wait 2112561 00:22:05.220 08:56:22 -- target/tls.sh@200 -- # killprocess 2112267 00:22:05.220 08:56:22 -- common/autotest_common.sh@936 -- # '[' -z 2112267 ']' 00:22:05.220 08:56:22 -- common/autotest_common.sh@940 -- # kill -0 2112267 00:22:05.220 08:56:22 -- common/autotest_common.sh@941 -- # uname 00:22:05.220 08:56:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:05.220 08:56:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2112267 00:22:05.220 08:56:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:05.220 08:56:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:05.220 08:56:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2112267' 00:22:05.220 killing process with pid 2112267 00:22:05.220 08:56:22 -- common/autotest_common.sh@955 -- # kill 2112267 00:22:05.220 [2024-04-26 08:56:22.450860] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:05.220 08:56:22 -- common/autotest_common.sh@960 -- # wait 2112267 00:22:05.479 08:56:22 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:05.479 08:56:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:05.479 08:56:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:05.479 08:56:22 -- target/tls.sh@203 -- # echo '{ 00:22:05.479 "subsystems": [ 00:22:05.479 { 00:22:05.479 "subsystem": "keyring", 00:22:05.479 "config": [] 00:22:05.479 }, 00:22:05.479 { 00:22:05.479 "subsystem": "iobuf", 00:22:05.479 "config": [ 00:22:05.479 { 00:22:05.479 "method": "iobuf_set_options", 00:22:05.479 "params": { 00:22:05.479 "small_pool_count": 8192, 00:22:05.479 "large_pool_count": 1024, 00:22:05.479 "small_bufsize": 8192, 00:22:05.479 "large_bufsize": 135168 00:22:05.479 } 00:22:05.479 } 00:22:05.479 ] 00:22:05.479 }, 00:22:05.479 { 00:22:05.479 "subsystem": "sock", 00:22:05.479 "config": [ 00:22:05.479 { 00:22:05.479 "method": "sock_impl_set_options", 00:22:05.479 "params": { 00:22:05.479 "impl_name": "posix", 00:22:05.479 "recv_buf_size": 2097152, 00:22:05.479 "send_buf_size": 2097152, 00:22:05.479 "enable_recv_pipe": true, 00:22:05.479 "enable_quickack": false, 00:22:05.479 "enable_placement_id": 0, 00:22:05.479 "enable_zerocopy_send_server": true, 00:22:05.479 "enable_zerocopy_send_client": false, 00:22:05.479 "zerocopy_threshold": 0, 00:22:05.479 "tls_version": 0, 00:22:05.479 "enable_ktls": false 
00:22:05.479 } 00:22:05.479 }, 00:22:05.479 { 00:22:05.479 "method": "sock_impl_set_options", 00:22:05.479 "params": { 00:22:05.479 "impl_name": "ssl", 00:22:05.479 "recv_buf_size": 4096, 00:22:05.479 "send_buf_size": 4096, 00:22:05.479 "enable_recv_pipe": true, 00:22:05.479 "enable_quickack": false, 00:22:05.480 "enable_placement_id": 0, 00:22:05.480 "enable_zerocopy_send_server": true, 00:22:05.480 "enable_zerocopy_send_client": false, 00:22:05.480 "zerocopy_threshold": 0, 00:22:05.480 "tls_version": 0, 00:22:05.480 "enable_ktls": false 00:22:05.480 } 00:22:05.480 } 00:22:05.480 ] 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "subsystem": "vmd", 00:22:05.480 "config": [] 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "subsystem": "accel", 00:22:05.480 "config": [ 00:22:05.480 { 00:22:05.480 "method": "accel_set_options", 00:22:05.480 "params": { 00:22:05.480 "small_cache_size": 128, 00:22:05.480 "large_cache_size": 16, 00:22:05.480 "task_count": 2048, 00:22:05.480 "sequence_count": 2048, 00:22:05.480 "buf_count": 2048 00:22:05.480 } 00:22:05.480 } 00:22:05.480 ] 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "subsystem": "bdev", 00:22:05.480 "config": [ 00:22:05.480 { 00:22:05.480 "method": "bdev_set_options", 00:22:05.480 "params": { 00:22:05.480 "bdev_io_pool_size": 65535, 00:22:05.480 "bdev_io_cache_size": 256, 00:22:05.480 "bdev_auto_examine": true, 00:22:05.480 "iobuf_small_cache_size": 128, 00:22:05.480 "iobuf_large_cache_size": 16 00:22:05.480 } 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "method": "bdev_raid_set_options", 00:22:05.480 "params": { 00:22:05.480 "process_window_size_kb": 1024 00:22:05.480 } 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "method": "bdev_iscsi_set_options", 00:22:05.480 "params": { 00:22:05.480 "timeout_sec": 30 00:22:05.480 } 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "method": "bdev_nvme_set_options", 00:22:05.480 "params": { 00:22:05.480 "action_on_timeout": "none", 00:22:05.480 "timeout_us": 0, 00:22:05.480 "timeout_admin_us": 0, 00:22:05.480 "keep_alive_timeout_ms": 10000, 00:22:05.480 "arbitration_burst": 0, 00:22:05.480 "low_priority_weight": 0, 00:22:05.480 "medium_priority_weight": 0, 00:22:05.480 "high_priority_weight": 0, 00:22:05.480 "nvme_adminq_poll_period_us": 10000, 00:22:05.480 "nvme_ioq_poll_period_us": 0, 00:22:05.480 "io_queue_requests": 0, 00:22:05.480 "delay_cmd_submit": true, 00:22:05.480 "transport_retry_count": 4, 00:22:05.480 "bdev_retry_count": 3, 00:22:05.480 "transport_ack_timeout": 0, 00:22:05.480 "ctrlr_loss_timeout_sec": 0, 00:22:05.480 "reconnect_delay_sec": 0, 00:22:05.480 "fast_io_fail_timeout_sec": 0, 00:22:05.480 "disable_auto_failback": false, 00:22:05.480 "generate_uuids": false, 00:22:05.480 "transport_tos": 0, 00:22:05.480 "nvme_error_stat": false, 00:22:05.480 "rdma_srq_size": 0, 00:22:05.480 "io_path_stat": false, 00:22:05.480 "allow_accel_sequence": false, 00:22:05.480 "rdma_max_cq_size": 0, 00:22:05.480 "rdma_cm_event_timeout_ms": 0, 00:22:05.480 "dhchap_digests": [ 00:22:05.480 "sha256", 00:22:05.480 "sha384", 00:22:05.480 "sha512" 00:22:05.480 ], 00:22:05.480 "dhchap_dhgroups": [ 00:22:05.480 "null", 00:22:05.480 "ffdhe2048", 00:22:05.480 "ffdhe3072", 00:22:05.480 "ffdhe4096", 00:22:05.480 "ffdhe6144", 00:22:05.480 "ffdhe8192" 00:22:05.480 ] 00:22:05.480 } 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "method": "bdev_nvme_set_hotplug", 00:22:05.480 "params": { 00:22:05.480 "period_us": 100000, 00:22:05.480 "enable": false 00:22:05.480 } 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "method": "bdev_malloc_create", 
00:22:05.480 "params": { 00:22:05.480 "name": "malloc0", 00:22:05.480 "num_blocks": 8192, 00:22:05.480 "block_size": 4096, 00:22:05.480 "physical_block_size": 4096, 00:22:05.480 "uuid": "74fc36d4-8a7d-433d-87a7-cadcf3dfd0ca", 00:22:05.480 "optimal_io_boundary": 0 00:22:05.480 } 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "method": "bdev_wait_for_examine" 00:22:05.480 } 00:22:05.480 ] 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "subsystem": "nbd", 00:22:05.480 "config": [] 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "subsystem": "scheduler", 00:22:05.480 "config": [ 00:22:05.480 { 00:22:05.480 "method": "framework_set_scheduler", 00:22:05.480 "params": { 00:22:05.480 "name": "static" 00:22:05.480 } 00:22:05.480 } 00:22:05.480 ] 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "subsystem": "nvmf", 00:22:05.480 "config": [ 00:22:05.480 { 00:22:05.480 "method": "nvmf_set_config", 00:22:05.480 "params": { 00:22:05.480 "discovery_filter": "match_any", 00:22:05.480 "admin_cmd_passthru": { 00:22:05.480 "identify_ctrlr": false 00:22:05.480 } 00:22:05.480 } 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "method": "nvmf_set_max_subsystems", 00:22:05.480 "params": { 00:22:05.480 "max_subsystems": 1024 00:22:05.480 } 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "method": "nvmf_set_crdt", 00:22:05.480 "params": { 00:22:05.480 "crdt1": 0, 00:22:05.480 "crdt2": 0, 00:22:05.480 "crdt3": 0 00:22:05.480 } 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "method": "nvmf_create_transport", 00:22:05.480 "params": { 00:22:05.480 "trtype": "TCP", 00:22:05.480 "max_queue_depth": 128, 00:22:05.480 "max_io_qpairs_per_ctrlr": 127, 00:22:05.480 "in_capsule_data_size": 4096, 00:22:05.480 "max_io_size": 131072, 00:22:05.480 "io_unit_size": 131072, 00:22:05.480 "max_aq_depth": 128, 00:22:05.480 "num_shared_buffers": 511, 00:22:05.480 "buf_cache_size": 4294967295, 00:22:05.480 "dif_insert_or_strip": false, 00:22:05.480 "zcopy": false, 00:22:05.480 "c2h_success": false, 00:22:05.480 "sock_priority": 0, 00:22:05.480 "abort_timeout_sec": 1, 00:22:05.480 "ack_timeout": 0, 00:22:05.480 "data_wr_pool_size": 0 00:22:05.480 } 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "method": "nvmf_create_subsystem", 00:22:05.480 "params": { 00:22:05.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.480 "allow_any_host": false, 00:22:05.480 "serial_number": "SPDK00000000000001", 00:22:05.480 "model_number": "SPDK bdev Controller", 00:22:05.480 "max_namespaces": 10, 00:22:05.480 "min_cntlid": 1, 00:22:05.480 "max_cntlid": 65519, 00:22:05.480 "ana_reporting": false 00:22:05.480 } 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "method": "nvmf_subsystem_add_host", 00:22:05.480 "params": { 00:22:05.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.480 "host": "nqn.2016-06.io.spdk:host1", 00:22:05.480 "psk": "/tmp/tmp.xOzX8qhDS4" 00:22:05.480 } 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "method": "nvmf_subsystem_add_ns", 00:22:05.480 "params": { 00:22:05.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.480 "namespace": { 00:22:05.480 "nsid": 1, 00:22:05.480 "bdev_name": "malloc0", 00:22:05.480 "nguid": "74FC36D48A7D433D87A7CADCF3DFD0CA", 00:22:05.480 "uuid": "74fc36d4-8a7d-433d-87a7-cadcf3dfd0ca", 00:22:05.480 "no_auto_visible": false 00:22:05.480 } 00:22:05.480 } 00:22:05.480 }, 00:22:05.480 { 00:22:05.480 "method": "nvmf_subsystem_add_listener", 00:22:05.480 "params": { 00:22:05.480 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.480 "listen_address": { 00:22:05.480 "trtype": "TCP", 00:22:05.480 "adrfam": "IPv4", 00:22:05.480 "traddr": "10.0.0.2", 00:22:05.480 
"trsvcid": "4420" 00:22:05.480 }, 00:22:05.480 "secure_channel": true 00:22:05.480 } 00:22:05.480 } 00:22:05.480 ] 00:22:05.480 } 00:22:05.480 ] 00:22:05.480 }' 00:22:05.480 08:56:22 -- common/autotest_common.sh@10 -- # set +x 00:22:05.480 08:56:22 -- nvmf/common.sh@470 -- # nvmfpid=2112860 00:22:05.480 08:56:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:05.480 08:56:22 -- nvmf/common.sh@471 -- # waitforlisten 2112860 00:22:05.480 08:56:22 -- common/autotest_common.sh@817 -- # '[' -z 2112860 ']' 00:22:05.480 08:56:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.480 08:56:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:05.481 08:56:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.481 08:56:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:05.481 08:56:22 -- common/autotest_common.sh@10 -- # set +x 00:22:05.481 [2024-04-26 08:56:22.716598] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:22:05.481 [2024-04-26 08:56:22.716650] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.739 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.739 [2024-04-26 08:56:22.791567] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.739 [2024-04-26 08:56:22.857581] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.739 [2024-04-26 08:56:22.857621] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.739 [2024-04-26 08:56:22.857634] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.739 [2024-04-26 08:56:22.857659] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.739 [2024-04-26 08:56:22.857666] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:05.739 [2024-04-26 08:56:22.857734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.998 [2024-04-26 08:56:23.051056] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.998 [2024-04-26 08:56:23.067032] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:05.998 [2024-04-26 08:56:23.083088] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:05.998 [2024-04-26 08:56:23.091587] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.567 08:56:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:06.567 08:56:23 -- common/autotest_common.sh@850 -- # return 0 00:22:06.567 08:56:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:06.567 08:56:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:06.567 08:56:23 -- common/autotest_common.sh@10 -- # set +x 00:22:06.567 08:56:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.567 08:56:23 -- target/tls.sh@207 -- # bdevperf_pid=2113135 00:22:06.567 08:56:23 -- target/tls.sh@208 -- # waitforlisten 2113135 /var/tmp/bdevperf.sock 00:22:06.567 08:56:23 -- common/autotest_common.sh@817 -- # '[' -z 2113135 ']' 00:22:06.567 08:56:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.567 08:56:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:06.567 08:56:23 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:06.567 08:56:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
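bdevperf gets the same treatment: its saved config, including the bdev_nvme_attach_controller call carrying the PSK, replaces the hand-issued RPCs of the first run. Mirroring tls.sh@197 and tls.sh@204 in sketch form:

  bdevperfconf=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
  # -z keeps bdevperf idle until perform_tests arrives; -c replays the saved attach at startup.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")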
00:22:06.567 08:56:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:06.567 08:56:23 -- target/tls.sh@204 -- # echo '{ 00:22:06.567 "subsystems": [ 00:22:06.567 { 00:22:06.567 "subsystem": "keyring", 00:22:06.567 "config": [] 00:22:06.567 }, 00:22:06.567 { 00:22:06.567 "subsystem": "iobuf", 00:22:06.567 "config": [ 00:22:06.567 { 00:22:06.567 "method": "iobuf_set_options", 00:22:06.567 "params": { 00:22:06.567 "small_pool_count": 8192, 00:22:06.567 "large_pool_count": 1024, 00:22:06.567 "small_bufsize": 8192, 00:22:06.567 "large_bufsize": 135168 00:22:06.567 } 00:22:06.567 } 00:22:06.567 ] 00:22:06.567 }, 00:22:06.567 { 00:22:06.567 "subsystem": "sock", 00:22:06.567 "config": [ 00:22:06.567 { 00:22:06.567 "method": "sock_impl_set_options", 00:22:06.567 "params": { 00:22:06.567 "impl_name": "posix", 00:22:06.567 "recv_buf_size": 2097152, 00:22:06.567 "send_buf_size": 2097152, 00:22:06.567 "enable_recv_pipe": true, 00:22:06.567 "enable_quickack": false, 00:22:06.567 "enable_placement_id": 0, 00:22:06.567 "enable_zerocopy_send_server": true, 00:22:06.567 "enable_zerocopy_send_client": false, 00:22:06.567 "zerocopy_threshold": 0, 00:22:06.567 "tls_version": 0, 00:22:06.567 "enable_ktls": false 00:22:06.567 } 00:22:06.567 }, 00:22:06.567 { 00:22:06.567 "method": "sock_impl_set_options", 00:22:06.567 "params": { 00:22:06.567 "impl_name": "ssl", 00:22:06.567 "recv_buf_size": 4096, 00:22:06.567 "send_buf_size": 4096, 00:22:06.567 "enable_recv_pipe": true, 00:22:06.567 "enable_quickack": false, 00:22:06.567 "enable_placement_id": 0, 00:22:06.567 "enable_zerocopy_send_server": true, 00:22:06.567 "enable_zerocopy_send_client": false, 00:22:06.567 "zerocopy_threshold": 0, 00:22:06.567 "tls_version": 0, 00:22:06.567 "enable_ktls": false 00:22:06.567 } 00:22:06.567 } 00:22:06.567 ] 00:22:06.567 }, 00:22:06.567 { 00:22:06.567 "subsystem": "vmd", 00:22:06.567 "config": [] 00:22:06.567 }, 00:22:06.567 { 00:22:06.567 "subsystem": "accel", 00:22:06.567 "config": [ 00:22:06.567 { 00:22:06.567 "method": "accel_set_options", 00:22:06.567 "params": { 00:22:06.567 "small_cache_size": 128, 00:22:06.567 "large_cache_size": 16, 00:22:06.567 "task_count": 2048, 00:22:06.567 "sequence_count": 2048, 00:22:06.567 "buf_count": 2048 00:22:06.567 } 00:22:06.567 } 00:22:06.567 ] 00:22:06.567 }, 00:22:06.567 { 00:22:06.567 "subsystem": "bdev", 00:22:06.567 "config": [ 00:22:06.567 { 00:22:06.567 "method": "bdev_set_options", 00:22:06.568 "params": { 00:22:06.568 "bdev_io_pool_size": 65535, 00:22:06.568 "bdev_io_cache_size": 256, 00:22:06.568 "bdev_auto_examine": true, 00:22:06.568 "iobuf_small_cache_size": 128, 00:22:06.568 "iobuf_large_cache_size": 16 00:22:06.568 } 00:22:06.568 }, 00:22:06.568 { 00:22:06.568 "method": "bdev_raid_set_options", 00:22:06.568 "params": { 00:22:06.568 "process_window_size_kb": 1024 00:22:06.568 } 00:22:06.568 }, 00:22:06.568 { 00:22:06.568 "method": "bdev_iscsi_set_options", 00:22:06.568 "params": { 00:22:06.568 "timeout_sec": 30 00:22:06.568 } 00:22:06.568 }, 00:22:06.568 { 00:22:06.568 "method": "bdev_nvme_set_options", 00:22:06.568 "params": { 00:22:06.568 "action_on_timeout": "none", 00:22:06.568 "timeout_us": 0, 00:22:06.568 "timeout_admin_us": 0, 00:22:06.568 "keep_alive_timeout_ms": 10000, 00:22:06.568 "arbitration_burst": 0, 00:22:06.568 "low_priority_weight": 0, 00:22:06.568 "medium_priority_weight": 0, 00:22:06.568 "high_priority_weight": 0, 00:22:06.568 "nvme_adminq_poll_period_us": 10000, 00:22:06.568 "nvme_ioq_poll_period_us": 0, 00:22:06.568 "io_queue_requests": 512, 
00:22:06.568 "delay_cmd_submit": true, 00:22:06.568 "transport_retry_count": 4, 00:22:06.568 "bdev_retry_count": 3, 00:22:06.568 "transport_ack_timeout": 0, 00:22:06.568 "ctrlr_loss_timeout_sec": 0, 00:22:06.568 "reconnect_delay_sec": 0, 00:22:06.568 "fast_io_fail_timeout_sec": 0, 00:22:06.568 "disable_auto_failback": false, 00:22:06.568 "generate_uuids": false, 00:22:06.568 "transport_tos": 0, 00:22:06.568 "nvme_error_stat": false, 00:22:06.568 "rdma_srq_size": 0, 00:22:06.568 "io_path_stat": false, 00:22:06.568 "allow_accel_sequence": false, 00:22:06.568 "rdma_max_cq_size": 0, 00:22:06.568 "rdma_cm_event_timeout_ms": 0, 00:22:06.568 "dhchap_digests": [ 00:22:06.568 "sha256", 00:22:06.568 "sha384", 00:22:06.568 "sha512" 00:22:06.568 ], 00:22:06.568 "dhchap_dhgroups": [ 00:22:06.568 "null", 00:22:06.568 "ffdhe2048", 00:22:06.568 "ffdhe3072", 00:22:06.568 "ffdhe4096", 00:22:06.568 "ffdhe6144", 00:22:06.568 "ffdhe8192" 00:22:06.568 ] 00:22:06.568 } 00:22:06.568 }, 00:22:06.568 { 00:22:06.568 "method": "bdev_nvme_attach_controller", 00:22:06.568 "params": { 00:22:06.568 "name": "TLSTEST", 00:22:06.568 "trtype": "TCP", 00:22:06.568 "adrfam": "IPv4", 00:22:06.568 "traddr": "10.0.0.2", 00:22:06.568 "trsvcid": "4420", 00:22:06.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.568 "prchk_reftag": false, 00:22:06.568 "prchk_guard": false, 00:22:06.568 "ctrlr_loss_timeout_sec": 0, 00:22:06.568 "reconnect_delay_sec": 0, 00:22:06.568 "fast_io_fail_timeout_sec": 0, 00:22:06.568 "psk": "/tmp/tmp.xOzX8qhDS4", 00:22:06.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:06.568 "hdgst": false, 00:22:06.568 "ddgst": false 00:22:06.568 } 00:22:06.568 }, 00:22:06.568 { 00:22:06.568 "method": "bdev_nvme_set_hotplug", 00:22:06.568 "params": { 00:22:06.568 "period_us": 100000, 00:22:06.568 "enable": false 00:22:06.568 } 00:22:06.568 }, 00:22:06.568 { 00:22:06.568 "method": "bdev_wait_for_examine" 00:22:06.568 } 00:22:06.568 ] 00:22:06.568 }, 00:22:06.568 { 00:22:06.568 "subsystem": "nbd", 00:22:06.568 "config": [] 00:22:06.568 } 00:22:06.568 ] 00:22:06.568 }' 00:22:06.568 08:56:23 -- common/autotest_common.sh@10 -- # set +x 00:22:06.568 [2024-04-26 08:56:23.601388] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:22:06.568 [2024-04-26 08:56:23.601439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2113135 ] 00:22:06.568 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.568 [2024-04-26 08:56:23.669537] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.568 [2024-04-26 08:56:23.740201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.827 [2024-04-26 08:56:23.874819] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:06.827 [2024-04-26 08:56:23.874904] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:07.394 08:56:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:07.394 08:56:24 -- common/autotest_common.sh@850 -- # return 0 00:22:07.394 08:56:24 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:07.394 Running I/O for 10 seconds... 
00:22:17.378 00:22:17.378 Latency(us) 00:22:17.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.378 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:17.378 Verification LBA range: start 0x0 length 0x2000 00:22:17.378 TLSTESTn1 : 10.07 1412.70 5.52 0.00 0.00 90351.27 6920.60 132540.01 00:22:17.378 =================================================================================================================== 00:22:17.378 Total : 1412.70 5.52 0.00 0.00 90351.27 6920.60 132540.01 00:22:17.378 0 00:22:17.378 08:56:34 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:17.378 08:56:34 -- target/tls.sh@214 -- # killprocess 2113135 00:22:17.378 08:56:34 -- common/autotest_common.sh@936 -- # '[' -z 2113135 ']' 00:22:17.378 08:56:34 -- common/autotest_common.sh@940 -- # kill -0 2113135 00:22:17.378 08:56:34 -- common/autotest_common.sh@941 -- # uname 00:22:17.378 08:56:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:17.378 08:56:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2113135 00:22:17.636 08:56:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:17.636 08:56:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:17.636 08:56:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2113135' 00:22:17.636 killing process with pid 2113135 00:22:17.636 08:56:34 -- common/autotest_common.sh@955 -- # kill 2113135 00:22:17.636 Received shutdown signal, test time was about 10.000000 seconds 00:22:17.636 00:22:17.636 Latency(us) 00:22:17.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.636 =================================================================================================================== 00:22:17.636 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:17.636 [2024-04-26 08:56:34.653899] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:17.636 08:56:34 -- common/autotest_common.sh@960 -- # wait 2113135 00:22:17.636 08:56:34 -- target/tls.sh@215 -- # killprocess 2112860 00:22:17.636 08:56:34 -- common/autotest_common.sh@936 -- # '[' -z 2112860 ']' 00:22:17.636 08:56:34 -- common/autotest_common.sh@940 -- # kill -0 2112860 00:22:17.636 08:56:34 -- common/autotest_common.sh@941 -- # uname 00:22:17.636 08:56:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:17.636 08:56:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2112860 00:22:17.896 08:56:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:17.896 08:56:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:17.896 08:56:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2112860' 00:22:17.896 killing process with pid 2112860 00:22:17.896 08:56:34 -- common/autotest_common.sh@955 -- # kill 2112860 00:22:17.896 [2024-04-26 08:56:34.907837] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:17.896 08:56:34 -- common/autotest_common.sh@960 -- # wait 2112860 00:22:17.896 08:56:35 -- target/tls.sh@218 -- # nvmfappstart 00:22:17.896 08:56:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:17.896 08:56:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:17.896 08:56:35 -- common/autotest_common.sh@10 -- # set +x 00:22:17.896 08:56:35 
-- nvmf/common.sh@470 -- # nvmfpid=2115009 00:22:17.896 08:56:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:17.896 08:56:35 -- nvmf/common.sh@471 -- # waitforlisten 2115009 00:22:17.896 08:56:35 -- common/autotest_common.sh@817 -- # '[' -z 2115009 ']' 00:22:17.896 08:56:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.896 08:56:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:17.896 08:56:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.896 08:56:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:17.896 08:56:35 -- common/autotest_common.sh@10 -- # set +x 00:22:18.155 [2024-04-26 08:56:35.171759] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:22:18.155 [2024-04-26 08:56:35.171809] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.155 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.155 [2024-04-26 08:56:35.244184] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.155 [2024-04-26 08:56:35.314591] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.155 [2024-04-26 08:56:35.314634] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.155 [2024-04-26 08:56:35.314644] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.155 [2024-04-26 08:56:35.314653] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.155 [2024-04-26 08:56:35.314661] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:18.155 [2024-04-26 08:56:35.314681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.723 08:56:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:18.723 08:56:35 -- common/autotest_common.sh@850 -- # return 0 00:22:18.723 08:56:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:18.723 08:56:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:18.723 08:56:35 -- common/autotest_common.sh@10 -- # set +x 00:22:18.982 08:56:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.982 08:56:36 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.xOzX8qhDS4 00:22:18.982 08:56:36 -- target/tls.sh@49 -- # local key=/tmp/tmp.xOzX8qhDS4 00:22:18.982 08:56:36 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:18.982 [2024-04-26 08:56:36.153459] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.982 08:56:36 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:19.241 08:56:36 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:19.500 [2024-04-26 08:56:36.490319] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:19.500 [2024-04-26 08:56:36.490544] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.500 08:56:36 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:19.500 malloc0 00:22:19.500 08:56:36 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:19.759 08:56:36 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xOzX8qhDS4 00:22:19.759 [2024-04-26 08:56:36.983763] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:19.759 08:56:37 -- target/tls.sh@222 -- # bdevperf_pid=2115388 00:22:19.759 08:56:37 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:20.018 08:56:37 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:20.018 08:56:37 -- target/tls.sh@225 -- # waitforlisten 2115388 /var/tmp/bdevperf.sock 00:22:20.018 08:56:37 -- common/autotest_common.sh@817 -- # '[' -z 2115388 ']' 00:22:20.018 08:56:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:20.018 08:56:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:20.018 08:56:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:20.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:20.018 08:56:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:20.018 08:56:37 -- common/autotest_common.sh@10 -- # set +x 00:22:20.018 [2024-04-26 08:56:37.052268] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:22:20.018 [2024-04-26 08:56:37.052321] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115388 ] 00:22:20.018 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.018 [2024-04-26 08:56:37.123329] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.018 [2024-04-26 08:56:37.191050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.953 08:56:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:20.953 08:56:37 -- common/autotest_common.sh@850 -- # return 0 00:22:20.953 08:56:37 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xOzX8qhDS4 00:22:20.953 08:56:38 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:20.953 [2024-04-26 08:56:38.157905] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:21.210 nvme0n1 00:22:21.210 08:56:38 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:21.210 Running I/O for 1 seconds... 
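This second pass exercises the supported replacement for the deprecated path above: the PSK file is first registered with the keyring (keyring_file_add_key key0 /tmp/tmp.xOzX8qhDS4) and the controller attach then references the key by name (--psk key0). Condensed from the commands visible in the log, target side first; rpc.py paths are abbreviated, the target runs inside the cvl_0_0_ns_spdk network namespace, and -k on the listener makes TLS mandatory:

    # target: TCP transport, a TLS-required listener, and a malloc namespace behind cnode1
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # the target side still takes the PSK as a path here, hence its own deprecation warning
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.xOzX8qhDS4

    # initiator: register the key with the keyring, then attach by key name
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xOzX8qhDS4
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The one-second verify run against the resulting nvme0n1 bdev and its latency table follow.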
00:22:22.584 00:22:22.584 Latency(us) 00:22:22.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.584 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:22.584 Verification LBA range: start 0x0 length 0x2000 00:22:22.584 nvme0n1 : 1.09 1249.33 4.88 0.00 0.00 99152.29 5478.81 130862.28 00:22:22.584 =================================================================================================================== 00:22:22.584 Total : 1249.33 4.88 0.00 0.00 99152.29 5478.81 130862.28 00:22:22.584 0 00:22:22.584 08:56:39 -- target/tls.sh@234 -- # killprocess 2115388 00:22:22.584 08:56:39 -- common/autotest_common.sh@936 -- # '[' -z 2115388 ']' 00:22:22.584 08:56:39 -- common/autotest_common.sh@940 -- # kill -0 2115388 00:22:22.584 08:56:39 -- common/autotest_common.sh@941 -- # uname 00:22:22.584 08:56:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:22.584 08:56:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2115388 00:22:22.584 08:56:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:22.584 08:56:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:22.584 08:56:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2115388' 00:22:22.584 killing process with pid 2115388 00:22:22.584 08:56:39 -- common/autotest_common.sh@955 -- # kill 2115388 00:22:22.584 Received shutdown signal, test time was about 1.000000 seconds 00:22:22.584 00:22:22.584 Latency(us) 00:22:22.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.584 =================================================================================================================== 00:22:22.584 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:22.584 08:56:39 -- common/autotest_common.sh@960 -- # wait 2115388 00:22:22.584 08:56:39 -- target/tls.sh@235 -- # killprocess 2115009 00:22:22.584 08:56:39 -- common/autotest_common.sh@936 -- # '[' -z 2115009 ']' 00:22:22.584 08:56:39 -- common/autotest_common.sh@940 -- # kill -0 2115009 00:22:22.584 08:56:39 -- common/autotest_common.sh@941 -- # uname 00:22:22.584 08:56:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:22.584 08:56:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2115009 00:22:22.584 08:56:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:22.584 08:56:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:22.584 08:56:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2115009' 00:22:22.584 killing process with pid 2115009 00:22:22.584 08:56:39 -- common/autotest_common.sh@955 -- # kill 2115009 00:22:22.584 [2024-04-26 08:56:39.760266] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:22.584 08:56:39 -- common/autotest_common.sh@960 -- # wait 2115009 00:22:22.841 08:56:39 -- target/tls.sh@238 -- # nvmfappstart 00:22:22.841 08:56:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:22.841 08:56:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:22.841 08:56:39 -- common/autotest_common.sh@10 -- # set +x 00:22:22.841 08:56:39 -- nvmf/common.sh@470 -- # nvmfpid=2115864 00:22:22.841 08:56:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:22.841 08:56:39 -- nvmf/common.sh@471 -- # waitforlisten 2115864 
00:22:22.841 08:56:39 -- common/autotest_common.sh@817 -- # '[' -z 2115864 ']' 00:22:22.841 08:56:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.841 08:56:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:22.841 08:56:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.841 08:56:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:22.841 08:56:39 -- common/autotest_common.sh@10 -- # set +x 00:22:22.841 [2024-04-26 08:56:40.029631] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:22:22.841 [2024-04-26 08:56:40.029682] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.841 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.099 [2024-04-26 08:56:40.105393] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.099 [2024-04-26 08:56:40.173543] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.099 [2024-04-26 08:56:40.173585] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.099 [2024-04-26 08:56:40.173595] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.099 [2024-04-26 08:56:40.173604] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.099 [2024-04-26 08:56:40.173612] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
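The final pass (target/tls.sh@238 onward, continuing below) checks that a TLS setup survives a configuration round-trip: a fresh target is hand-configured, save_config is then called on each RPC socket, and new target and bdevperf instances are booted directly from those JSON snapshots via /dev/fd/62 and /dev/fd/63. A sketch of the mechanism, using bash process substitution in place of the script's explicit file descriptors (equivalent plumbing, abbreviated paths):

    # capture the live target configuration and replay it into a new nvmf_tgt
    tgtcfg=$(./scripts/rpc.py save_config)
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")

    # same on the initiator, with a histogram enabled for the verify run
    bperfcfg=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")

    # sanity check: the replayed config must have recreated controller nvme0
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'

Note how the restored dumps differ from the hand-built config of the first pass: the target snapshot carries keyring_file_add_key plus nvmf_subsystem_add_host with "psk": "key0", so the key association is preserved by keyring name rather than by file path.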
00:22:23.099 [2024-04-26 08:56:40.173637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.666 08:56:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:23.666 08:56:40 -- common/autotest_common.sh@850 -- # return 0 00:22:23.666 08:56:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:23.666 08:56:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:23.666 08:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:23.666 08:56:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.666 08:56:40 -- target/tls.sh@239 -- # rpc_cmd 00:22:23.666 08:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.666 08:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:23.666 [2024-04-26 08:56:40.875802] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.666 malloc0 00:22:23.666 [2024-04-26 08:56:40.904063] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:23.666 [2024-04-26 08:56:40.904271] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.925 08:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.925 08:56:40 -- target/tls.sh@252 -- # bdevperf_pid=2116128 00:22:23.925 08:56:40 -- target/tls.sh@254 -- # waitforlisten 2116128 /var/tmp/bdevperf.sock 00:22:23.925 08:56:40 -- common/autotest_common.sh@817 -- # '[' -z 2116128 ']' 00:22:23.925 08:56:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.925 08:56:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:23.925 08:56:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.925 08:56:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:23.925 08:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:23.925 08:56:40 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:23.925 [2024-04-26 08:56:40.977898] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:22:23.925 [2024-04-26 08:56:40.977943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116128 ] 00:22:23.925 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.926 [2024-04-26 08:56:41.047887] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.926 [2024-04-26 08:56:41.118942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.863 08:56:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:24.863 08:56:41 -- common/autotest_common.sh@850 -- # return 0 00:22:24.863 08:56:41 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xOzX8qhDS4 00:22:24.863 08:56:41 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:24.863 [2024-04-26 08:56:42.058336] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:25.122 nvme0n1 00:22:25.122 08:56:42 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:25.122 Running I/O for 1 seconds... 00:22:26.504 00:22:26.504 Latency(us) 00:22:26.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.504 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:26.504 Verification LBA range: start 0x0 length 0x2000 00:22:26.504 nvme0n1 : 1.08 1318.27 5.15 0.00 0.00 94410.89 6920.60 130862.28 00:22:26.504 =================================================================================================================== 00:22:26.504 Total : 1318.27 5.15 0.00 0.00 94410.89 6920.60 130862.28 00:22:26.504 0 00:22:26.504 08:56:43 -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:26.504 08:56:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.504 08:56:43 -- common/autotest_common.sh@10 -- # set +x 00:22:26.504 08:56:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.504 08:56:43 -- target/tls.sh@263 -- # tgtcfg='{ 00:22:26.504 "subsystems": [ 00:22:26.504 { 00:22:26.504 "subsystem": "keyring", 00:22:26.504 "config": [ 00:22:26.504 { 00:22:26.504 "method": "keyring_file_add_key", 00:22:26.504 "params": { 00:22:26.504 "name": "key0", 00:22:26.504 "path": "/tmp/tmp.xOzX8qhDS4" 00:22:26.504 } 00:22:26.504 } 00:22:26.504 ] 00:22:26.504 }, 00:22:26.504 { 00:22:26.504 "subsystem": "iobuf", 00:22:26.504 "config": [ 00:22:26.504 { 00:22:26.504 "method": "iobuf_set_options", 00:22:26.504 "params": { 00:22:26.504 "small_pool_count": 8192, 00:22:26.504 "large_pool_count": 1024, 00:22:26.504 "small_bufsize": 8192, 00:22:26.504 "large_bufsize": 135168 00:22:26.504 } 00:22:26.504 } 00:22:26.504 ] 00:22:26.504 }, 00:22:26.504 { 00:22:26.504 "subsystem": "sock", 00:22:26.504 "config": [ 00:22:26.504 { 00:22:26.504 "method": "sock_impl_set_options", 00:22:26.504 "params": { 00:22:26.504 "impl_name": "posix", 00:22:26.504 "recv_buf_size": 2097152, 00:22:26.504 "send_buf_size": 2097152, 00:22:26.504 "enable_recv_pipe": true, 00:22:26.504 "enable_quickack": false, 00:22:26.504 "enable_placement_id": 0, 00:22:26.504 
"enable_zerocopy_send_server": true, 00:22:26.504 "enable_zerocopy_send_client": false, 00:22:26.504 "zerocopy_threshold": 0, 00:22:26.504 "tls_version": 0, 00:22:26.504 "enable_ktls": false 00:22:26.504 } 00:22:26.504 }, 00:22:26.504 { 00:22:26.504 "method": "sock_impl_set_options", 00:22:26.504 "params": { 00:22:26.504 "impl_name": "ssl", 00:22:26.504 "recv_buf_size": 4096, 00:22:26.504 "send_buf_size": 4096, 00:22:26.504 "enable_recv_pipe": true, 00:22:26.504 "enable_quickack": false, 00:22:26.504 "enable_placement_id": 0, 00:22:26.504 "enable_zerocopy_send_server": true, 00:22:26.504 "enable_zerocopy_send_client": false, 00:22:26.504 "zerocopy_threshold": 0, 00:22:26.504 "tls_version": 0, 00:22:26.504 "enable_ktls": false 00:22:26.504 } 00:22:26.504 } 00:22:26.504 ] 00:22:26.504 }, 00:22:26.504 { 00:22:26.504 "subsystem": "vmd", 00:22:26.504 "config": [] 00:22:26.504 }, 00:22:26.504 { 00:22:26.504 "subsystem": "accel", 00:22:26.504 "config": [ 00:22:26.504 { 00:22:26.504 "method": "accel_set_options", 00:22:26.504 "params": { 00:22:26.504 "small_cache_size": 128, 00:22:26.504 "large_cache_size": 16, 00:22:26.504 "task_count": 2048, 00:22:26.504 "sequence_count": 2048, 00:22:26.504 "buf_count": 2048 00:22:26.504 } 00:22:26.504 } 00:22:26.504 ] 00:22:26.504 }, 00:22:26.504 { 00:22:26.504 "subsystem": "bdev", 00:22:26.504 "config": [ 00:22:26.504 { 00:22:26.504 "method": "bdev_set_options", 00:22:26.504 "params": { 00:22:26.504 "bdev_io_pool_size": 65535, 00:22:26.504 "bdev_io_cache_size": 256, 00:22:26.504 "bdev_auto_examine": true, 00:22:26.504 "iobuf_small_cache_size": 128, 00:22:26.504 "iobuf_large_cache_size": 16 00:22:26.504 } 00:22:26.504 }, 00:22:26.504 { 00:22:26.504 "method": "bdev_raid_set_options", 00:22:26.504 "params": { 00:22:26.504 "process_window_size_kb": 1024 00:22:26.504 } 00:22:26.504 }, 00:22:26.504 { 00:22:26.504 "method": "bdev_iscsi_set_options", 00:22:26.504 "params": { 00:22:26.504 "timeout_sec": 30 00:22:26.504 } 00:22:26.504 }, 00:22:26.504 { 00:22:26.504 "method": "bdev_nvme_set_options", 00:22:26.504 "params": { 00:22:26.504 "action_on_timeout": "none", 00:22:26.504 "timeout_us": 0, 00:22:26.504 "timeout_admin_us": 0, 00:22:26.504 "keep_alive_timeout_ms": 10000, 00:22:26.504 "arbitration_burst": 0, 00:22:26.504 "low_priority_weight": 0, 00:22:26.504 "medium_priority_weight": 0, 00:22:26.504 "high_priority_weight": 0, 00:22:26.504 "nvme_adminq_poll_period_us": 10000, 00:22:26.504 "nvme_ioq_poll_period_us": 0, 00:22:26.504 "io_queue_requests": 0, 00:22:26.504 "delay_cmd_submit": true, 00:22:26.504 "transport_retry_count": 4, 00:22:26.504 "bdev_retry_count": 3, 00:22:26.504 "transport_ack_timeout": 0, 00:22:26.504 "ctrlr_loss_timeout_sec": 0, 00:22:26.504 "reconnect_delay_sec": 0, 00:22:26.504 "fast_io_fail_timeout_sec": 0, 00:22:26.504 "disable_auto_failback": false, 00:22:26.504 "generate_uuids": false, 00:22:26.504 "transport_tos": 0, 00:22:26.504 "nvme_error_stat": false, 00:22:26.504 "rdma_srq_size": 0, 00:22:26.504 "io_path_stat": false, 00:22:26.504 "allow_accel_sequence": false, 00:22:26.504 "rdma_max_cq_size": 0, 00:22:26.504 "rdma_cm_event_timeout_ms": 0, 00:22:26.504 "dhchap_digests": [ 00:22:26.504 "sha256", 00:22:26.504 "sha384", 00:22:26.504 "sha512" 00:22:26.504 ], 00:22:26.504 "dhchap_dhgroups": [ 00:22:26.504 "null", 00:22:26.504 "ffdhe2048", 00:22:26.504 "ffdhe3072", 00:22:26.504 "ffdhe4096", 00:22:26.504 "ffdhe6144", 00:22:26.504 "ffdhe8192" 00:22:26.504 ] 00:22:26.504 } 00:22:26.504 }, 00:22:26.504 { 00:22:26.504 "method": 
"bdev_nvme_set_hotplug", 00:22:26.504 "params": { 00:22:26.504 "period_us": 100000, 00:22:26.504 "enable": false 00:22:26.504 } 00:22:26.504 }, 00:22:26.504 { 00:22:26.504 "method": "bdev_malloc_create", 00:22:26.504 "params": { 00:22:26.504 "name": "malloc0", 00:22:26.504 "num_blocks": 8192, 00:22:26.504 "block_size": 4096, 00:22:26.504 "physical_block_size": 4096, 00:22:26.504 "uuid": "68f4b1d2-6b43-4da6-9dc5-7511519d93ef", 00:22:26.505 "optimal_io_boundary": 0 00:22:26.505 } 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "method": "bdev_wait_for_examine" 00:22:26.505 } 00:22:26.505 ] 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "subsystem": "nbd", 00:22:26.505 "config": [] 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "subsystem": "scheduler", 00:22:26.505 "config": [ 00:22:26.505 { 00:22:26.505 "method": "framework_set_scheduler", 00:22:26.505 "params": { 00:22:26.505 "name": "static" 00:22:26.505 } 00:22:26.505 } 00:22:26.505 ] 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "subsystem": "nvmf", 00:22:26.505 "config": [ 00:22:26.505 { 00:22:26.505 "method": "nvmf_set_config", 00:22:26.505 "params": { 00:22:26.505 "discovery_filter": "match_any", 00:22:26.505 "admin_cmd_passthru": { 00:22:26.505 "identify_ctrlr": false 00:22:26.505 } 00:22:26.505 } 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "method": "nvmf_set_max_subsystems", 00:22:26.505 "params": { 00:22:26.505 "max_subsystems": 1024 00:22:26.505 } 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "method": "nvmf_set_crdt", 00:22:26.505 "params": { 00:22:26.505 "crdt1": 0, 00:22:26.505 "crdt2": 0, 00:22:26.505 "crdt3": 0 00:22:26.505 } 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "method": "nvmf_create_transport", 00:22:26.505 "params": { 00:22:26.505 "trtype": "TCP", 00:22:26.505 "max_queue_depth": 128, 00:22:26.505 "max_io_qpairs_per_ctrlr": 127, 00:22:26.505 "in_capsule_data_size": 4096, 00:22:26.505 "max_io_size": 131072, 00:22:26.505 "io_unit_size": 131072, 00:22:26.505 "max_aq_depth": 128, 00:22:26.505 "num_shared_buffers": 511, 00:22:26.505 "buf_cache_size": 4294967295, 00:22:26.505 "dif_insert_or_strip": false, 00:22:26.505 "zcopy": false, 00:22:26.505 "c2h_success": false, 00:22:26.505 "sock_priority": 0, 00:22:26.505 "abort_timeout_sec": 1, 00:22:26.505 "ack_timeout": 0, 00:22:26.505 "data_wr_pool_size": 0 00:22:26.505 } 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "method": "nvmf_create_subsystem", 00:22:26.505 "params": { 00:22:26.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.505 "allow_any_host": false, 00:22:26.505 "serial_number": "00000000000000000000", 00:22:26.505 "model_number": "SPDK bdev Controller", 00:22:26.505 "max_namespaces": 32, 00:22:26.505 "min_cntlid": 1, 00:22:26.505 "max_cntlid": 65519, 00:22:26.505 "ana_reporting": false 00:22:26.505 } 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "method": "nvmf_subsystem_add_host", 00:22:26.505 "params": { 00:22:26.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.505 "host": "nqn.2016-06.io.spdk:host1", 00:22:26.505 "psk": "key0" 00:22:26.505 } 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "method": "nvmf_subsystem_add_ns", 00:22:26.505 "params": { 00:22:26.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.505 "namespace": { 00:22:26.505 "nsid": 1, 00:22:26.505 "bdev_name": "malloc0", 00:22:26.505 "nguid": "68F4B1D26B434DA69DC57511519D93EF", 00:22:26.505 "uuid": "68f4b1d2-6b43-4da6-9dc5-7511519d93ef", 00:22:26.505 "no_auto_visible": false 00:22:26.505 } 00:22:26.505 } 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "method": "nvmf_subsystem_add_listener", 00:22:26.505 "params": { 
00:22:26.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.505 "listen_address": { 00:22:26.505 "trtype": "TCP", 00:22:26.505 "adrfam": "IPv4", 00:22:26.505 "traddr": "10.0.0.2", 00:22:26.505 "trsvcid": "4420" 00:22:26.505 }, 00:22:26.505 "secure_channel": true 00:22:26.505 } 00:22:26.505 } 00:22:26.505 ] 00:22:26.505 } 00:22:26.505 ] 00:22:26.505 }' 00:22:26.505 08:56:43 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:26.505 08:56:43 -- target/tls.sh@264 -- # bperfcfg='{ 00:22:26.505 "subsystems": [ 00:22:26.505 { 00:22:26.505 "subsystem": "keyring", 00:22:26.505 "config": [ 00:22:26.505 { 00:22:26.505 "method": "keyring_file_add_key", 00:22:26.505 "params": { 00:22:26.505 "name": "key0", 00:22:26.505 "path": "/tmp/tmp.xOzX8qhDS4" 00:22:26.505 } 00:22:26.505 } 00:22:26.505 ] 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "subsystem": "iobuf", 00:22:26.505 "config": [ 00:22:26.505 { 00:22:26.505 "method": "iobuf_set_options", 00:22:26.505 "params": { 00:22:26.505 "small_pool_count": 8192, 00:22:26.505 "large_pool_count": 1024, 00:22:26.505 "small_bufsize": 8192, 00:22:26.505 "large_bufsize": 135168 00:22:26.505 } 00:22:26.505 } 00:22:26.505 ] 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "subsystem": "sock", 00:22:26.505 "config": [ 00:22:26.505 { 00:22:26.505 "method": "sock_impl_set_options", 00:22:26.505 "params": { 00:22:26.505 "impl_name": "posix", 00:22:26.505 "recv_buf_size": 2097152, 00:22:26.505 "send_buf_size": 2097152, 00:22:26.505 "enable_recv_pipe": true, 00:22:26.505 "enable_quickack": false, 00:22:26.505 "enable_placement_id": 0, 00:22:26.505 "enable_zerocopy_send_server": true, 00:22:26.505 "enable_zerocopy_send_client": false, 00:22:26.505 "zerocopy_threshold": 0, 00:22:26.505 "tls_version": 0, 00:22:26.505 "enable_ktls": false 00:22:26.505 } 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "method": "sock_impl_set_options", 00:22:26.505 "params": { 00:22:26.505 "impl_name": "ssl", 00:22:26.505 "recv_buf_size": 4096, 00:22:26.505 "send_buf_size": 4096, 00:22:26.505 "enable_recv_pipe": true, 00:22:26.505 "enable_quickack": false, 00:22:26.505 "enable_placement_id": 0, 00:22:26.505 "enable_zerocopy_send_server": true, 00:22:26.505 "enable_zerocopy_send_client": false, 00:22:26.505 "zerocopy_threshold": 0, 00:22:26.505 "tls_version": 0, 00:22:26.505 "enable_ktls": false 00:22:26.505 } 00:22:26.505 } 00:22:26.505 ] 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "subsystem": "vmd", 00:22:26.505 "config": [] 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "subsystem": "accel", 00:22:26.505 "config": [ 00:22:26.505 { 00:22:26.505 "method": "accel_set_options", 00:22:26.505 "params": { 00:22:26.505 "small_cache_size": 128, 00:22:26.505 "large_cache_size": 16, 00:22:26.505 "task_count": 2048, 00:22:26.505 "sequence_count": 2048, 00:22:26.505 "buf_count": 2048 00:22:26.505 } 00:22:26.505 } 00:22:26.505 ] 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "subsystem": "bdev", 00:22:26.505 "config": [ 00:22:26.505 { 00:22:26.505 "method": "bdev_set_options", 00:22:26.505 "params": { 00:22:26.505 "bdev_io_pool_size": 65535, 00:22:26.505 "bdev_io_cache_size": 256, 00:22:26.505 "bdev_auto_examine": true, 00:22:26.505 "iobuf_small_cache_size": 128, 00:22:26.505 "iobuf_large_cache_size": 16 00:22:26.505 } 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "method": "bdev_raid_set_options", 00:22:26.505 "params": { 00:22:26.505 "process_window_size_kb": 1024 00:22:26.505 } 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "method": 
"bdev_iscsi_set_options", 00:22:26.505 "params": { 00:22:26.505 "timeout_sec": 30 00:22:26.505 } 00:22:26.505 }, 00:22:26.505 { 00:22:26.505 "method": "bdev_nvme_set_options", 00:22:26.505 "params": { 00:22:26.505 "action_on_timeout": "none", 00:22:26.505 "timeout_us": 0, 00:22:26.505 "timeout_admin_us": 0, 00:22:26.505 "keep_alive_timeout_ms": 10000, 00:22:26.505 "arbitration_burst": 0, 00:22:26.505 "low_priority_weight": 0, 00:22:26.505 "medium_priority_weight": 0, 00:22:26.505 "high_priority_weight": 0, 00:22:26.505 "nvme_adminq_poll_period_us": 10000, 00:22:26.505 "nvme_ioq_poll_period_us": 0, 00:22:26.505 "io_queue_requests": 512, 00:22:26.505 "delay_cmd_submit": true, 00:22:26.505 "transport_retry_count": 4, 00:22:26.505 "bdev_retry_count": 3, 00:22:26.505 "transport_ack_timeout": 0, 00:22:26.505 "ctrlr_loss_timeout_sec": 0, 00:22:26.505 "reconnect_delay_sec": 0, 00:22:26.505 "fast_io_fail_timeout_sec": 0, 00:22:26.505 "disable_auto_failback": false, 00:22:26.505 "generate_uuids": false, 00:22:26.505 "transport_tos": 0, 00:22:26.505 "nvme_error_stat": false, 00:22:26.505 "rdma_srq_size": 0, 00:22:26.505 "io_path_stat": false, 00:22:26.506 "allow_accel_sequence": false, 00:22:26.506 "rdma_max_cq_size": 0, 00:22:26.506 "rdma_cm_event_timeout_ms": 0, 00:22:26.506 "dhchap_digests": [ 00:22:26.506 "sha256", 00:22:26.506 "sha384", 00:22:26.506 "sha512" 00:22:26.506 ], 00:22:26.506 "dhchap_dhgroups": [ 00:22:26.506 "null", 00:22:26.506 "ffdhe2048", 00:22:26.506 "ffdhe3072", 00:22:26.506 "ffdhe4096", 00:22:26.506 "ffdhe6144", 00:22:26.506 "ffdhe8192" 00:22:26.506 ] 00:22:26.506 } 00:22:26.506 }, 00:22:26.506 { 00:22:26.506 "method": "bdev_nvme_attach_controller", 00:22:26.506 "params": { 00:22:26.506 "name": "nvme0", 00:22:26.506 "trtype": "TCP", 00:22:26.506 "adrfam": "IPv4", 00:22:26.506 "traddr": "10.0.0.2", 00:22:26.506 "trsvcid": "4420", 00:22:26.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.506 "prchk_reftag": false, 00:22:26.506 "prchk_guard": false, 00:22:26.506 "ctrlr_loss_timeout_sec": 0, 00:22:26.506 "reconnect_delay_sec": 0, 00:22:26.506 "fast_io_fail_timeout_sec": 0, 00:22:26.506 "psk": "key0", 00:22:26.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:26.506 "hdgst": false, 00:22:26.506 "ddgst": false 00:22:26.506 } 00:22:26.506 }, 00:22:26.506 { 00:22:26.506 "method": "bdev_nvme_set_hotplug", 00:22:26.506 "params": { 00:22:26.506 "period_us": 100000, 00:22:26.506 "enable": false 00:22:26.506 } 00:22:26.506 }, 00:22:26.506 { 00:22:26.506 "method": "bdev_enable_histogram", 00:22:26.506 "params": { 00:22:26.506 "name": "nvme0n1", 00:22:26.506 "enable": true 00:22:26.506 } 00:22:26.506 }, 00:22:26.506 { 00:22:26.506 "method": "bdev_wait_for_examine" 00:22:26.506 } 00:22:26.506 ] 00:22:26.506 }, 00:22:26.506 { 00:22:26.506 "subsystem": "nbd", 00:22:26.506 "config": [] 00:22:26.506 } 00:22:26.506 ] 00:22:26.506 }' 00:22:26.506 08:56:43 -- target/tls.sh@266 -- # killprocess 2116128 00:22:26.506 08:56:43 -- common/autotest_common.sh@936 -- # '[' -z 2116128 ']' 00:22:26.506 08:56:43 -- common/autotest_common.sh@940 -- # kill -0 2116128 00:22:26.506 08:56:43 -- common/autotest_common.sh@941 -- # uname 00:22:26.506 08:56:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:26.506 08:56:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2116128 00:22:26.506 08:56:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:26.506 08:56:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:26.506 08:56:43 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 2116128' 00:22:26.506 killing process with pid 2116128 00:22:26.506 08:56:43 -- common/autotest_common.sh@955 -- # kill 2116128 00:22:26.506 Received shutdown signal, test time was about 1.000000 seconds 00:22:26.506 00:22:26.506 Latency(us) 00:22:26.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.506 =================================================================================================================== 00:22:26.506 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:26.506 08:56:43 -- common/autotest_common.sh@960 -- # wait 2116128 00:22:26.765 08:56:43 -- target/tls.sh@267 -- # killprocess 2115864 00:22:26.765 08:56:43 -- common/autotest_common.sh@936 -- # '[' -z 2115864 ']' 00:22:26.765 08:56:43 -- common/autotest_common.sh@940 -- # kill -0 2115864 00:22:26.765 08:56:43 -- common/autotest_common.sh@941 -- # uname 00:22:26.765 08:56:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:26.765 08:56:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2115864 00:22:26.765 08:56:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:26.765 08:56:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:26.765 08:56:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2115864' 00:22:26.765 killing process with pid 2115864 00:22:26.765 08:56:43 -- common/autotest_common.sh@955 -- # kill 2115864 00:22:26.765 08:56:43 -- common/autotest_common.sh@960 -- # wait 2115864 00:22:27.027 08:56:44 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:27.028 08:56:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:27.028 08:56:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:27.028 08:56:44 -- common/autotest_common.sh@10 -- # set +x 00:22:27.028 08:56:44 -- target/tls.sh@269 -- # echo '{ 00:22:27.028 "subsystems": [ 00:22:27.028 { 00:22:27.028 "subsystem": "keyring", 00:22:27.028 "config": [ 00:22:27.028 { 00:22:27.028 "method": "keyring_file_add_key", 00:22:27.028 "params": { 00:22:27.028 "name": "key0", 00:22:27.028 "path": "/tmp/tmp.xOzX8qhDS4" 00:22:27.028 } 00:22:27.028 } 00:22:27.028 ] 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "subsystem": "iobuf", 00:22:27.028 "config": [ 00:22:27.028 { 00:22:27.028 "method": "iobuf_set_options", 00:22:27.028 "params": { 00:22:27.028 "small_pool_count": 8192, 00:22:27.028 "large_pool_count": 1024, 00:22:27.028 "small_bufsize": 8192, 00:22:27.028 "large_bufsize": 135168 00:22:27.028 } 00:22:27.028 } 00:22:27.028 ] 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "subsystem": "sock", 00:22:27.028 "config": [ 00:22:27.028 { 00:22:27.028 "method": "sock_impl_set_options", 00:22:27.028 "params": { 00:22:27.028 "impl_name": "posix", 00:22:27.028 "recv_buf_size": 2097152, 00:22:27.028 "send_buf_size": 2097152, 00:22:27.028 "enable_recv_pipe": true, 00:22:27.028 "enable_quickack": false, 00:22:27.028 "enable_placement_id": 0, 00:22:27.028 "enable_zerocopy_send_server": true, 00:22:27.028 "enable_zerocopy_send_client": false, 00:22:27.028 "zerocopy_threshold": 0, 00:22:27.028 "tls_version": 0, 00:22:27.028 "enable_ktls": false 00:22:27.028 } 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "method": "sock_impl_set_options", 00:22:27.028 "params": { 00:22:27.028 "impl_name": "ssl", 00:22:27.028 "recv_buf_size": 4096, 00:22:27.028 "send_buf_size": 4096, 00:22:27.028 "enable_recv_pipe": true, 00:22:27.028 "enable_quickack": false, 00:22:27.028 "enable_placement_id": 
0, 00:22:27.028 "enable_zerocopy_send_server": true, 00:22:27.028 "enable_zerocopy_send_client": false, 00:22:27.028 "zerocopy_threshold": 0, 00:22:27.028 "tls_version": 0, 00:22:27.028 "enable_ktls": false 00:22:27.028 } 00:22:27.028 } 00:22:27.028 ] 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "subsystem": "vmd", 00:22:27.028 "config": [] 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "subsystem": "accel", 00:22:27.028 "config": [ 00:22:27.028 { 00:22:27.028 "method": "accel_set_options", 00:22:27.028 "params": { 00:22:27.028 "small_cache_size": 128, 00:22:27.028 "large_cache_size": 16, 00:22:27.028 "task_count": 2048, 00:22:27.028 "sequence_count": 2048, 00:22:27.028 "buf_count": 2048 00:22:27.028 } 00:22:27.028 } 00:22:27.028 ] 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "subsystem": "bdev", 00:22:27.028 "config": [ 00:22:27.028 { 00:22:27.028 "method": "bdev_set_options", 00:22:27.028 "params": { 00:22:27.028 "bdev_io_pool_size": 65535, 00:22:27.028 "bdev_io_cache_size": 256, 00:22:27.028 "bdev_auto_examine": true, 00:22:27.028 "iobuf_small_cache_size": 128, 00:22:27.028 "iobuf_large_cache_size": 16 00:22:27.028 } 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "method": "bdev_raid_set_options", 00:22:27.028 "params": { 00:22:27.028 "process_window_size_kb": 1024 00:22:27.028 } 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "method": "bdev_iscsi_set_options", 00:22:27.028 "params": { 00:22:27.028 "timeout_sec": 30 00:22:27.028 } 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "method": "bdev_nvme_set_options", 00:22:27.028 "params": { 00:22:27.028 "action_on_timeout": "none", 00:22:27.028 "timeout_us": 0, 00:22:27.028 "timeout_admin_us": 0, 00:22:27.028 "keep_alive_timeout_ms": 10000, 00:22:27.028 "arbitration_burst": 0, 00:22:27.028 "low_priority_weight": 0, 00:22:27.028 "medium_priority_weight": 0, 00:22:27.028 "high_priority_weight": 0, 00:22:27.028 "nvme_adminq_poll_period_us": 10000, 00:22:27.028 "nvme_ioq_poll_period_us": 0, 00:22:27.028 "io_queue_requests": 0, 00:22:27.028 "delay_cmd_submit": true, 00:22:27.028 "transport_retry_count": 4, 00:22:27.028 "bdev_retry_count": 3, 00:22:27.028 "transport_ack_timeout": 0, 00:22:27.028 "ctrlr_loss_timeout_sec": 0, 00:22:27.028 "reconnect_delay_sec": 0, 00:22:27.028 "fast_io_fail_timeout_sec": 0, 00:22:27.028 "disable_auto_failback": false, 00:22:27.028 "generate_uuids": false, 00:22:27.028 "transport_tos": 0, 00:22:27.028 "nvme_error_stat": false, 00:22:27.028 "rdma_srq_size": 0, 00:22:27.028 "io_path_stat": false, 00:22:27.028 "allow_accel_sequence": false, 00:22:27.028 "rdma_max_cq_size": 0, 00:22:27.028 "rdma_cm_event_timeout_ms": 0, 00:22:27.028 "dhchap_digests": [ 00:22:27.028 "sha256", 00:22:27.028 "sha384", 00:22:27.028 "sha512" 00:22:27.028 ], 00:22:27.028 "dhchap_dhgroups": [ 00:22:27.028 "null", 00:22:27.028 "ffdhe2048", 00:22:27.028 "ffdhe3072", 00:22:27.028 "ffdhe4096", 00:22:27.028 "ffdhe6144", 00:22:27.028 "ffdhe8192" 00:22:27.028 ] 00:22:27.028 } 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "method": "bdev_nvme_set_hotplug", 00:22:27.028 "params": { 00:22:27.028 "period_us": 100000, 00:22:27.028 "enable": false 00:22:27.028 } 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "method": "bdev_malloc_create", 00:22:27.028 "params": { 00:22:27.028 "name": "malloc0", 00:22:27.028 "num_blocks": 8192, 00:22:27.028 "block_size": 4096, 00:22:27.028 "physical_block_size": 4096, 00:22:27.028 "uuid": "68f4b1d2-6b43-4da6-9dc5-7511519d93ef", 00:22:27.028 "optimal_io_boundary": 0 00:22:27.028 } 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "method": 
"bdev_wait_for_examine" 00:22:27.028 } 00:22:27.028 ] 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "subsystem": "nbd", 00:22:27.028 "config": [] 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "subsystem": "scheduler", 00:22:27.028 "config": [ 00:22:27.028 { 00:22:27.028 "method": "framework_set_scheduler", 00:22:27.028 "params": { 00:22:27.028 "name": "static" 00:22:27.028 } 00:22:27.028 } 00:22:27.028 ] 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "subsystem": "nvmf", 00:22:27.028 "config": [ 00:22:27.028 { 00:22:27.028 "method": "nvmf_set_config", 00:22:27.028 "params": { 00:22:27.028 "discovery_filter": "match_any", 00:22:27.028 "admin_cmd_passthru": { 00:22:27.028 "identify_ctrlr": false 00:22:27.028 } 00:22:27.028 } 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "method": "nvmf_set_max_subsystems", 00:22:27.028 "params": { 00:22:27.028 "max_subsystems": 1024 00:22:27.028 } 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "method": "nvmf_set_crdt", 00:22:27.028 "params": { 00:22:27.028 "crdt1": 0, 00:22:27.028 "crdt2": 0, 00:22:27.028 "crdt3": 0 00:22:27.028 } 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "method": "nvmf_create_transport", 00:22:27.028 "params": { 00:22:27.028 "trtype": "TCP", 00:22:27.028 "max_queue_depth": 128, 00:22:27.028 "max_io_qpairs_per_ctrlr": 127, 00:22:27.028 "in_capsule_data_size": 4096, 00:22:27.028 "max_io_size": 131072, 00:22:27.028 "io_unit_size": 131072, 00:22:27.028 "max_aq_depth": 128, 00:22:27.028 "num_shared_buffers": 511, 00:22:27.028 "buf_cache_size": 4294967295, 00:22:27.028 "dif_insert_or_strip": false, 00:22:27.028 "zcopy": false, 00:22:27.028 "c2h_success": false, 00:22:27.028 "sock_priority": 0, 00:22:27.028 "abort_timeout_sec": 1, 00:22:27.028 "ack_timeout": 0, 00:22:27.028 "data_wr_pool_size": 0 00:22:27.028 } 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "method": "nvmf_create_subsystem", 00:22:27.028 "params": { 00:22:27.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.028 "allow_any_host": false, 00:22:27.028 "serial_number": "00000000000000000000", 00:22:27.028 "model_number": "SPDK bdev Controller", 00:22:27.028 "max_namespaces": 32, 00:22:27.028 "min_cntlid": 1, 00:22:27.028 "max_cntlid": 65519, 00:22:27.028 "ana_reporting": false 00:22:27.028 } 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "method": "nvmf_subsystem_add_host", 00:22:27.028 "params": { 00:22:27.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.028 "host": "nqn.2016-06.io.spdk:host1", 00:22:27.028 "psk": "key0" 00:22:27.028 } 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "method": "nvmf_subsystem_add_ns", 00:22:27.028 "params": { 00:22:27.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.028 "namespace": { 00:22:27.028 "nsid": 1, 00:22:27.028 "bdev_name": "malloc0", 00:22:27.028 "nguid": "68F4B1D26B434DA69DC57511519D93EF", 00:22:27.028 "uuid": "68f4b1d2-6b43-4da6-9dc5-7511519d93ef", 00:22:27.028 "no_auto_visible": false 00:22:27.028 } 00:22:27.028 } 00:22:27.028 }, 00:22:27.028 { 00:22:27.028 "method": "nvmf_subsystem_add_listener", 00:22:27.028 "params": { 00:22:27.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.028 "listen_address": { 00:22:27.029 "trtype": "TCP", 00:22:27.029 "adrfam": "IPv4", 00:22:27.029 "traddr": "10.0.0.2", 00:22:27.029 "trsvcid": "4420" 00:22:27.029 }, 00:22:27.029 "secure_channel": true 00:22:27.029 } 00:22:27.029 } 00:22:27.029 ] 00:22:27.029 } 00:22:27.029 ] 00:22:27.029 }' 00:22:27.029 08:56:44 -- nvmf/common.sh@470 -- # nvmfpid=2116681 00:22:27.029 08:56:44 -- nvmf/common.sh@471 -- # waitforlisten 2116681 00:22:27.029 08:56:44 -- nvmf/common.sh@469 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:27.029 08:56:44 -- common/autotest_common.sh@817 -- # '[' -z 2116681 ']' 00:22:27.029 08:56:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.029 08:56:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:27.029 08:56:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.029 08:56:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:27.029 08:56:44 -- common/autotest_common.sh@10 -- # set +x 00:22:27.029 [2024-04-26 08:56:44.235688] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:22:27.029 [2024-04-26 08:56:44.235737] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.029 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.295 [2024-04-26 08:56:44.309066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.295 [2024-04-26 08:56:44.380392] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.295 [2024-04-26 08:56:44.380429] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.295 [2024-04-26 08:56:44.380439] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.295 [2024-04-26 08:56:44.380448] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.295 [2024-04-26 08:56:44.380462] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.295 [2024-04-26 08:56:44.380521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.554 [2024-04-26 08:56:44.581676] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.554 [2024-04-26 08:56:44.613711] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:27.554 [2024-04-26 08:56:44.622813] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.813 08:56:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:27.813 08:56:45 -- common/autotest_common.sh@850 -- # return 0 00:22:27.813 08:56:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:27.813 08:56:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:27.813 08:56:45 -- common/autotest_common.sh@10 -- # set +x 00:22:28.073 08:56:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.073 08:56:45 -- target/tls.sh@272 -- # bdevperf_pid=2116911 00:22:28.073 08:56:45 -- target/tls.sh@273 -- # waitforlisten 2116911 /var/tmp/bdevperf.sock 00:22:28.073 08:56:45 -- common/autotest_common.sh@817 -- # '[' -z 2116911 ']' 00:22:28.073 08:56:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.073 08:56:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:28.073 08:56:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:28.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.073 08:56:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:28.073 08:56:45 -- common/autotest_common.sh@10 -- # set +x 00:22:28.073 08:56:45 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:28.073 08:56:45 -- target/tls.sh@270 -- # echo '{ 00:22:28.073 "subsystems": [ 00:22:28.073 { 00:22:28.073 "subsystem": "keyring", 00:22:28.073 "config": [ 00:22:28.073 { 00:22:28.073 "method": "keyring_file_add_key", 00:22:28.073 "params": { 00:22:28.073 "name": "key0", 00:22:28.073 "path": "/tmp/tmp.xOzX8qhDS4" 00:22:28.073 } 00:22:28.073 } 00:22:28.073 ] 00:22:28.073 }, 00:22:28.073 { 00:22:28.073 "subsystem": "iobuf", 00:22:28.073 "config": [ 00:22:28.073 { 00:22:28.073 "method": "iobuf_set_options", 00:22:28.073 "params": { 00:22:28.073 "small_pool_count": 8192, 00:22:28.073 "large_pool_count": 1024, 00:22:28.073 "small_bufsize": 8192, 00:22:28.073 "large_bufsize": 135168 00:22:28.073 } 00:22:28.073 } 00:22:28.073 ] 00:22:28.073 }, 00:22:28.073 { 00:22:28.073 "subsystem": "sock", 00:22:28.073 "config": [ 00:22:28.073 { 00:22:28.073 "method": "sock_impl_set_options", 00:22:28.073 "params": { 00:22:28.073 "impl_name": "posix", 00:22:28.073 "recv_buf_size": 2097152, 00:22:28.073 "send_buf_size": 2097152, 00:22:28.073 "enable_recv_pipe": true, 00:22:28.073 "enable_quickack": false, 00:22:28.073 "enable_placement_id": 0, 00:22:28.073 "enable_zerocopy_send_server": true, 00:22:28.073 "enable_zerocopy_send_client": false, 00:22:28.073 "zerocopy_threshold": 0, 00:22:28.073 "tls_version": 0, 00:22:28.073 "enable_ktls": false 00:22:28.073 } 00:22:28.073 }, 00:22:28.073 { 00:22:28.073 "method": "sock_impl_set_options", 00:22:28.073 "params": { 00:22:28.073 "impl_name": "ssl", 00:22:28.073 "recv_buf_size": 4096, 00:22:28.073 "send_buf_size": 4096, 00:22:28.073 "enable_recv_pipe": true, 00:22:28.073 "enable_quickack": false, 00:22:28.073 "enable_placement_id": 0, 00:22:28.073 "enable_zerocopy_send_server": true, 00:22:28.073 "enable_zerocopy_send_client": false, 00:22:28.073 "zerocopy_threshold": 0, 00:22:28.073 "tls_version": 0, 00:22:28.073 "enable_ktls": false 00:22:28.073 } 00:22:28.073 } 00:22:28.073 ] 00:22:28.073 }, 00:22:28.073 { 00:22:28.073 "subsystem": "vmd", 00:22:28.073 "config": [] 00:22:28.073 }, 00:22:28.073 { 00:22:28.073 "subsystem": "accel", 00:22:28.073 "config": [ 00:22:28.073 { 00:22:28.073 "method": "accel_set_options", 00:22:28.073 "params": { 00:22:28.073 "small_cache_size": 128, 00:22:28.073 "large_cache_size": 16, 00:22:28.073 "task_count": 2048, 00:22:28.073 "sequence_count": 2048, 00:22:28.073 "buf_count": 2048 00:22:28.073 } 00:22:28.073 } 00:22:28.073 ] 00:22:28.073 }, 00:22:28.073 { 00:22:28.073 "subsystem": "bdev", 00:22:28.073 "config": [ 00:22:28.073 { 00:22:28.073 "method": "bdev_set_options", 00:22:28.073 "params": { 00:22:28.073 "bdev_io_pool_size": 65535, 00:22:28.073 "bdev_io_cache_size": 256, 00:22:28.074 "bdev_auto_examine": true, 00:22:28.074 "iobuf_small_cache_size": 128, 00:22:28.074 "iobuf_large_cache_size": 16 00:22:28.074 } 00:22:28.074 }, 00:22:28.074 { 00:22:28.074 "method": "bdev_raid_set_options", 00:22:28.074 "params": { 00:22:28.074 "process_window_size_kb": 1024 00:22:28.074 } 00:22:28.074 }, 00:22:28.074 { 00:22:28.074 "method": "bdev_iscsi_set_options", 00:22:28.074 "params": { 00:22:28.074 
"timeout_sec": 30 00:22:28.074 } 00:22:28.074 }, 00:22:28.074 { 00:22:28.074 "method": "bdev_nvme_set_options", 00:22:28.074 "params": { 00:22:28.074 "action_on_timeout": "none", 00:22:28.074 "timeout_us": 0, 00:22:28.074 "timeout_admin_us": 0, 00:22:28.074 "keep_alive_timeout_ms": 10000, 00:22:28.074 "arbitration_burst": 0, 00:22:28.074 "low_priority_weight": 0, 00:22:28.074 "medium_priority_weight": 0, 00:22:28.074 "high_priority_weight": 0, 00:22:28.074 "nvme_adminq_poll_period_us": 10000, 00:22:28.074 "nvme_ioq_poll_period_us": 0, 00:22:28.074 "io_queue_requests": 512, 00:22:28.074 "delay_cmd_submit": true, 00:22:28.074 "transport_retry_count": 4, 00:22:28.074 "bdev_retry_count": 3, 00:22:28.074 "transport_ack_timeout": 0, 00:22:28.074 "ctrlr_loss_timeout_sec": 0, 00:22:28.074 "reconnect_delay_sec": 0, 00:22:28.074 "fast_io_fail_timeout_sec": 0, 00:22:28.074 "disable_auto_failback": false, 00:22:28.074 "generate_uuids": false, 00:22:28.074 "transport_tos": 0, 00:22:28.074 "nvme_error_stat": false, 00:22:28.074 "rdma_srq_size": 0, 00:22:28.074 "io_path_stat": false, 00:22:28.074 "allow_accel_sequence": false, 00:22:28.074 "rdma_max_cq_size": 0, 00:22:28.074 "rdma_cm_event_timeout_ms": 0, 00:22:28.074 "dhchap_digests": [ 00:22:28.074 "sha256", 00:22:28.074 "sha384", 00:22:28.074 "sha512" 00:22:28.074 ], 00:22:28.074 "dhchap_dhgroups": [ 00:22:28.074 "null", 00:22:28.074 "ffdhe2048", 00:22:28.074 "ffdhe3072", 00:22:28.074 "ffdhe4096", 00:22:28.074 "ffdhe6144", 00:22:28.074 "ffdhe8192" 00:22:28.074 ] 00:22:28.074 } 00:22:28.074 }, 00:22:28.074 { 00:22:28.074 "method": "bdev_nvme_attach_controller", 00:22:28.074 "params": { 00:22:28.074 "name": "nvme0", 00:22:28.074 "trtype": "TCP", 00:22:28.074 "adrfam": "IPv4", 00:22:28.074 "traddr": "10.0.0.2", 00:22:28.074 "trsvcid": "4420", 00:22:28.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.074 "prchk_reftag": false, 00:22:28.074 "prchk_guard": false, 00:22:28.074 "ctrlr_loss_timeout_sec": 0, 00:22:28.074 "reconnect_delay_sec": 0, 00:22:28.074 "fast_io_fail_timeout_sec": 0, 00:22:28.074 "psk": "key0", 00:22:28.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.074 "hdgst": false, 00:22:28.074 "ddgst": false 00:22:28.074 } 00:22:28.074 }, 00:22:28.074 { 00:22:28.074 "method": "bdev_nvme_set_hotplug", 00:22:28.074 "params": { 00:22:28.074 "period_us": 100000, 00:22:28.074 "enable": false 00:22:28.074 } 00:22:28.074 }, 00:22:28.074 { 00:22:28.074 "method": "bdev_enable_histogram", 00:22:28.074 "params": { 00:22:28.074 "name": "nvme0n1", 00:22:28.074 "enable": true 00:22:28.074 } 00:22:28.074 }, 00:22:28.074 { 00:22:28.074 "method": "bdev_wait_for_examine" 00:22:28.074 } 00:22:28.074 ] 00:22:28.074 }, 00:22:28.074 { 00:22:28.074 "subsystem": "nbd", 00:22:28.074 "config": [] 00:22:28.074 } 00:22:28.074 ] 00:22:28.074 }' 00:22:28.074 [2024-04-26 08:56:45.124496] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:22:28.074 [2024-04-26 08:56:45.124547] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116911 ] 00:22:28.074 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.074 [2024-04-26 08:56:45.194852] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.074 [2024-04-26 08:56:45.262132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.333 [2024-04-26 08:56:45.404896] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:28.902 08:56:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:28.902 08:56:45 -- common/autotest_common.sh@850 -- # return 0 00:22:28.902 08:56:45 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:28.902 08:56:45 -- target/tls.sh@275 -- # jq -r '.[].name' 00:22:28.902 08:56:46 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.902 08:56:46 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:29.161 Running I/O for 1 seconds... 00:22:30.098 00:22:30.098 Latency(us) 00:22:30.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.098 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:30.098 Verification LBA range: start 0x0 length 0x2000 00:22:30.098 nvme0n1 : 1.09 1259.45 4.92 0.00 0.00 98692.82 7235.17 135895.45 00:22:30.098 =================================================================================================================== 00:22:30.098 Total : 1259.45 4.92 0.00 0.00 98692.82 7235.17 135895.45 00:22:30.098 0 00:22:30.098 08:56:47 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:22:30.098 08:56:47 -- target/tls.sh@279 -- # cleanup 00:22:30.098 08:56:47 -- target/tls.sh@15 -- # process_shm --id 0 00:22:30.098 08:56:47 -- common/autotest_common.sh@794 -- # type=--id 00:22:30.098 08:56:47 -- common/autotest_common.sh@795 -- # id=0 00:22:30.098 08:56:47 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:22:30.099 08:56:47 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:30.099 08:56:47 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:22:30.099 08:56:47 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:22:30.099 08:56:47 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:22:30.099 08:56:47 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:30.099 nvmf_trace.0 00:22:30.099 08:56:47 -- common/autotest_common.sh@809 -- # return 0 00:22:30.099 08:56:47 -- target/tls.sh@16 -- # killprocess 2116911 00:22:30.099 08:56:47 -- common/autotest_common.sh@936 -- # '[' -z 2116911 ']' 00:22:30.099 08:56:47 -- common/autotest_common.sh@940 -- # kill -0 2116911 00:22:30.099 08:56:47 -- common/autotest_common.sh@941 -- # uname 00:22:30.099 08:56:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:30.099 08:56:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2116911 00:22:30.358 08:56:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:30.358 08:56:47 -- common/autotest_common.sh@946 -- # 
'[' reactor_1 = sudo ']' 00:22:30.358 08:56:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2116911' 00:22:30.358 killing process with pid 2116911 00:22:30.358 08:56:47 -- common/autotest_common.sh@955 -- # kill 2116911 00:22:30.358 Received shutdown signal, test time was about 1.000000 seconds 00:22:30.358 00:22:30.358 Latency(us) 00:22:30.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.358 =================================================================================================================== 00:22:30.358 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:30.358 08:56:47 -- common/autotest_common.sh@960 -- # wait 2116911 00:22:30.358 08:56:47 -- target/tls.sh@17 -- # nvmftestfini 00:22:30.358 08:56:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:30.358 08:56:47 -- nvmf/common.sh@117 -- # sync 00:22:30.358 08:56:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:30.358 08:56:47 -- nvmf/common.sh@120 -- # set +e 00:22:30.358 08:56:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:30.358 08:56:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:30.358 rmmod nvme_tcp 00:22:30.617 rmmod nvme_fabrics 00:22:30.617 rmmod nvme_keyring 00:22:30.617 08:56:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:30.617 08:56:47 -- nvmf/common.sh@124 -- # set -e 00:22:30.617 08:56:47 -- nvmf/common.sh@125 -- # return 0 00:22:30.617 08:56:47 -- nvmf/common.sh@478 -- # '[' -n 2116681 ']' 00:22:30.617 08:56:47 -- nvmf/common.sh@479 -- # killprocess 2116681 00:22:30.617 08:56:47 -- common/autotest_common.sh@936 -- # '[' -z 2116681 ']' 00:22:30.617 08:56:47 -- common/autotest_common.sh@940 -- # kill -0 2116681 00:22:30.617 08:56:47 -- common/autotest_common.sh@941 -- # uname 00:22:30.617 08:56:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:30.617 08:56:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2116681 00:22:30.617 08:56:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:30.618 08:56:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:30.618 08:56:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2116681' 00:22:30.618 killing process with pid 2116681 00:22:30.618 08:56:47 -- common/autotest_common.sh@955 -- # kill 2116681 00:22:30.618 08:56:47 -- common/autotest_common.sh@960 -- # wait 2116681 00:22:30.877 08:56:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:30.877 08:56:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:30.877 08:56:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:30.877 08:56:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:30.877 08:56:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:30.877 08:56:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.878 08:56:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.878 08:56:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.788 08:56:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:32.788 08:56:49 -- target/tls.sh@18 -- # rm -f /tmp/tmp.23YZopw52r /tmp/tmp.4baRui9IGG /tmp/tmp.xOzX8qhDS4 00:22:32.788 00:22:32.788 real 1m27.110s 00:22:32.788 user 2m9.536s 00:22:32.788 sys 0m33.002s 00:22:32.788 08:56:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:32.788 08:56:49 -- common/autotest_common.sh@10 -- # set +x 00:22:32.788 ************************************ 00:22:32.788 END TEST nvmf_tls 00:22:32.788 
************************************ 00:22:32.788 08:56:50 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:32.788 08:56:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:32.788 08:56:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:32.788 08:56:50 -- common/autotest_common.sh@10 -- # set +x 00:22:33.046 ************************************ 00:22:33.046 START TEST nvmf_fips 00:22:33.046 ************************************ 00:22:33.046 08:56:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:33.046 * Looking for test storage... 00:22:33.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:33.046 08:56:50 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.046 08:56:50 -- nvmf/common.sh@7 -- # uname -s 00:22:33.046 08:56:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.046 08:56:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.046 08:56:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.046 08:56:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.046 08:56:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.046 08:56:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.046 08:56:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.046 08:56:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.046 08:56:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.046 08:56:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.046 08:56:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:33.046 08:56:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:33.305 08:56:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.305 08:56:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.305 08:56:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.305 08:56:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.305 08:56:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.305 08:56:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.305 08:56:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.305 08:56:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.305 08:56:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.305 08:56:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.305 08:56:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.305 08:56:50 -- paths/export.sh@5 -- # export PATH 00:22:33.305 08:56:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.305 08:56:50 -- nvmf/common.sh@47 -- # : 0 00:22:33.305 08:56:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:33.305 08:56:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:33.305 08:56:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.305 08:56:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.305 08:56:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.305 08:56:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:33.305 08:56:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:33.305 08:56:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:33.305 08:56:50 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:33.305 08:56:50 -- fips/fips.sh@89 -- # check_openssl_version 00:22:33.305 08:56:50 -- fips/fips.sh@83 -- # local target=3.0.0 00:22:33.305 08:56:50 -- fips/fips.sh@85 -- # openssl version 00:22:33.305 08:56:50 -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:33.305 08:56:50 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:33.305 08:56:50 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:33.305 08:56:50 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:33.305 08:56:50 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:33.305 08:56:50 -- scripts/common.sh@333 -- # IFS=.-: 00:22:33.305 08:56:50 -- scripts/common.sh@333 -- # read -ra ver1 00:22:33.305 08:56:50 -- scripts/common.sh@334 -- # IFS=.-: 00:22:33.305 08:56:50 -- scripts/common.sh@334 -- # read -ra ver2 00:22:33.305 08:56:50 -- scripts/common.sh@335 -- # local 'op=>=' 00:22:33.305 08:56:50 -- scripts/common.sh@337 -- # ver1_l=3 00:22:33.306 08:56:50 -- scripts/common.sh@338 -- # ver2_l=3 00:22:33.306 08:56:50 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
00:22:33.306 08:56:50 -- scripts/common.sh@341 -- # case "$op" in 00:22:33.306 08:56:50 -- scripts/common.sh@345 -- # : 1 00:22:33.306 08:56:50 -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:33.306 08:56:50 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:33.306 08:56:50 -- scripts/common.sh@362 -- # decimal 3 00:22:33.306 08:56:50 -- scripts/common.sh@350 -- # local d=3 00:22:33.306 08:56:50 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:33.306 08:56:50 -- scripts/common.sh@352 -- # echo 3 00:22:33.306 08:56:50 -- scripts/common.sh@362 -- # ver1[v]=3 00:22:33.306 08:56:50 -- scripts/common.sh@363 -- # decimal 3 00:22:33.306 08:56:50 -- scripts/common.sh@350 -- # local d=3 00:22:33.306 08:56:50 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:33.306 08:56:50 -- scripts/common.sh@352 -- # echo 3 00:22:33.306 08:56:50 -- scripts/common.sh@363 -- # ver2[v]=3 00:22:33.306 08:56:50 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:33.306 08:56:50 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:33.306 08:56:50 -- scripts/common.sh@361 -- # (( v++ )) 00:22:33.306 08:56:50 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:33.306 08:56:50 -- scripts/common.sh@362 -- # decimal 0 00:22:33.306 08:56:50 -- scripts/common.sh@350 -- # local d=0 00:22:33.306 08:56:50 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:33.306 08:56:50 -- scripts/common.sh@352 -- # echo 0 00:22:33.306 08:56:50 -- scripts/common.sh@362 -- # ver1[v]=0 00:22:33.306 08:56:50 -- scripts/common.sh@363 -- # decimal 0 00:22:33.306 08:56:50 -- scripts/common.sh@350 -- # local d=0 00:22:33.306 08:56:50 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:33.306 08:56:50 -- scripts/common.sh@352 -- # echo 0 00:22:33.306 08:56:50 -- scripts/common.sh@363 -- # ver2[v]=0 00:22:33.306 08:56:50 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:33.306 08:56:50 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:33.306 08:56:50 -- scripts/common.sh@361 -- # (( v++ )) 00:22:33.306 08:56:50 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:33.306 08:56:50 -- scripts/common.sh@362 -- # decimal 9 00:22:33.306 08:56:50 -- scripts/common.sh@350 -- # local d=9 00:22:33.306 08:56:50 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:33.306 08:56:50 -- scripts/common.sh@352 -- # echo 9 00:22:33.306 08:56:50 -- scripts/common.sh@362 -- # ver1[v]=9 00:22:33.306 08:56:50 -- scripts/common.sh@363 -- # decimal 0 00:22:33.306 08:56:50 -- scripts/common.sh@350 -- # local d=0 00:22:33.306 08:56:50 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:33.306 08:56:50 -- scripts/common.sh@352 -- # echo 0 00:22:33.306 08:56:50 -- scripts/common.sh@363 -- # ver2[v]=0 00:22:33.306 08:56:50 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:33.306 08:56:50 -- scripts/common.sh@364 -- # return 0 00:22:33.306 08:56:50 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:33.306 08:56:50 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:33.306 08:56:50 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:33.306 08:56:50 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:33.306 08:56:50 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:33.306 08:56:50 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:33.306 08:56:50 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:33.306 08:56:50 -- fips/fips.sh@113 -- # build_openssl_config 00:22:33.306 08:56:50 -- fips/fips.sh@37 -- # cat 00:22:33.306 08:56:50 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:22:33.306 08:56:50 -- fips/fips.sh@58 -- # cat - 00:22:33.306 08:56:50 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:33.306 08:56:50 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:33.306 08:56:50 -- fips/fips.sh@116 -- # mapfile -t providers 00:22:33.306 08:56:50 -- fips/fips.sh@116 -- # openssl list -providers 00:22:33.306 08:56:50 -- fips/fips.sh@116 -- # grep name 00:22:33.306 08:56:50 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:33.306 08:56:50 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:33.306 08:56:50 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:33.306 08:56:50 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:33.306 08:56:50 -- common/autotest_common.sh@638 -- # local es=0 00:22:33.306 08:56:50 -- fips/fips.sh@127 -- # : 00:22:33.306 08:56:50 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:33.306 08:56:50 -- common/autotest_common.sh@626 -- # local arg=openssl 00:22:33.306 08:56:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:33.306 08:56:50 -- common/autotest_common.sh@630 -- # type -t openssl 00:22:33.306 08:56:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:33.306 08:56:50 -- common/autotest_common.sh@632 -- # type -P openssl 00:22:33.306 08:56:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:33.306 08:56:50 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:22:33.306 08:56:50 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:22:33.306 08:56:50 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:22:33.306 Error setting digest 00:22:33.306 00722C6F0C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:33.306 00722C6F0C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:33.306 08:56:50 -- common/autotest_common.sh@641 -- # es=1 00:22:33.306 08:56:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:33.306 08:56:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:33.306 08:56:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:33.306 08:56:50 -- fips/fips.sh@130 -- # nvmftestinit 00:22:33.306 08:56:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:33.306 08:56:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.306 08:56:50 -- nvmf/common.sh@437 -- # prepare_net_devs 
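Condensed, the FIPS probe in fips.sh does three things: confirm the provider module is installed, load it via a generated OpenSSL config, and prove enforcement by checking that a non-approved digest fails. A rough standalone sketch of that check (paths follow the RHEL 9 layout visible in the trace and may differ on other distributions; the generation of spdk_fips.conf itself is glossed over):

    moddir=$(openssl info -modulesdir)        # /usr/lib64/ossl-modules on this rig
    [ -f "$moddir/fips.so" ] || { echo 'FIPS provider module missing'; exit 1; }
    export OPENSSL_CONF=spdk_fips.conf        # config activating base + fips providers
    openssl list -providers | grep name       # expect both a base and a fips entry
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo 'md5 still usable: FIPS mode not active'; exit 1
    fi
    echo 'md5 rejected: FIPS provider is enforcing approved algorithms'

The "Error setting digest" lines above are exactly this negative test passing: with the FIPS provider active, the MD5 fetch fails inside libcrypto and the script treats the non-zero exit status as success.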
00:22:33.306 08:56:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:33.306 08:56:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:33.306 08:56:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.306 08:56:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.306 08:56:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.306 08:56:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:33.306 08:56:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:33.306 08:56:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:33.306 08:56:50 -- common/autotest_common.sh@10 -- # set +x 00:22:39.867 08:56:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:39.867 08:56:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:39.867 08:56:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:39.867 08:56:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:39.867 08:56:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:39.867 08:56:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:39.867 08:56:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:39.867 08:56:56 -- nvmf/common.sh@295 -- # net_devs=() 00:22:39.867 08:56:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:39.867 08:56:56 -- nvmf/common.sh@296 -- # e810=() 00:22:39.867 08:56:56 -- nvmf/common.sh@296 -- # local -ga e810 00:22:39.867 08:56:56 -- nvmf/common.sh@297 -- # x722=() 00:22:39.867 08:56:56 -- nvmf/common.sh@297 -- # local -ga x722 00:22:39.867 08:56:56 -- nvmf/common.sh@298 -- # mlx=() 00:22:39.867 08:56:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:39.867 08:56:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.867 08:56:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.867 08:56:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.867 08:56:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.867 08:56:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.867 08:56:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.867 08:56:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.867 08:56:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.867 08:56:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.867 08:56:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.867 08:56:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.867 08:56:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:39.867 08:56:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:39.867 08:56:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:39.867 08:56:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:39.867 08:56:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:39.867 08:56:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:39.867 08:56:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.867 08:56:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:39.867 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:39.867 08:56:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.867 08:56:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.867 08:56:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.867 08:56:56 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.867 08:56:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.867 08:56:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.867 08:56:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:39.867 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:39.867 08:56:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.867 08:56:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.867 08:56:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.867 08:56:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.867 08:56:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.867 08:56:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:39.868 08:56:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:39.868 08:56:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:39.868 08:56:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.868 08:56:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.868 08:56:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:39.868 08:56:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.868 08:56:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:39.868 Found net devices under 0000:af:00.0: cvl_0_0 00:22:39.868 08:56:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.868 08:56:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.868 08:56:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.868 08:56:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:39.868 08:56:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.868 08:56:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:39.868 Found net devices under 0000:af:00.1: cvl_0_1 00:22:39.868 08:56:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.868 08:56:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:39.868 08:56:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:39.868 08:56:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:39.868 08:56:56 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:39.868 08:56:56 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:39.868 08:56:56 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.868 08:56:56 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.868 08:56:56 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.868 08:56:56 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:39.868 08:56:56 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.868 08:56:56 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.868 08:56:56 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:39.868 08:56:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.868 08:56:56 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.868 08:56:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:39.868 08:56:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:39.868 08:56:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.868 08:56:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.126 08:56:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.126 08:56:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:22:40.126 08:56:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:40.126 08:56:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.126 08:56:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.126 08:56:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.126 08:56:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:40.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:22:40.126 00:22:40.126 --- 10.0.0.2 ping statistics --- 00:22:40.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.126 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:22:40.126 08:56:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:22:40.126 00:22:40.126 --- 10.0.0.1 ping statistics --- 00:22:40.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.127 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:22:40.127 08:56:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.127 08:56:57 -- nvmf/common.sh@411 -- # return 0 00:22:40.127 08:56:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:40.127 08:56:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.127 08:56:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:40.127 08:56:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:40.127 08:56:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.127 08:56:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:40.127 08:56:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:40.127 08:56:57 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:40.127 08:56:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:40.127 08:56:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:40.127 08:56:57 -- common/autotest_common.sh@10 -- # set +x 00:22:40.127 08:56:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:40.127 08:56:57 -- nvmf/common.sh@470 -- # nvmfpid=2121011 00:22:40.127 08:56:57 -- nvmf/common.sh@471 -- # waitforlisten 2121011 00:22:40.127 08:56:57 -- common/autotest_common.sh@817 -- # '[' -z 2121011 ']' 00:22:40.127 08:56:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.127 08:56:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:40.127 08:56:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.127 08:56:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:40.127 08:56:57 -- common/autotest_common.sh@10 -- # set +x 00:22:40.386 [2024-04-26 08:56:57.418765] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
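nvmf_tcp_init, traced above, wires the two e810 ports back-to-back: one port is moved into a network namespace to act as the target while its peer stays in the root namespace as the initiator, and a bidirectional ping validates the path before any NVMe traffic flows. The sequence, condensed from the trace (cvl_0_0/cvl_0_1 are the names udev assigned on this rig):

    ip netns add cvl_0_0_ns_spdk                       # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

Every target-side command afterwards, including nvmf_tgt itself, runs under "ip netns exec cvl_0_0_ns_spdk" (the NVMF_TARGET_NS_CMD prefix seen in the trace).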
00:22:40.386 [2024-04-26 08:56:57.418814] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.386 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.386 [2024-04-26 08:56:57.488819] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.386 [2024-04-26 08:56:57.558911] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.386 [2024-04-26 08:56:57.558946] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.386 [2024-04-26 08:56:57.558958] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.386 [2024-04-26 08:56:57.558967] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.386 [2024-04-26 08:56:57.558975] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.386 [2024-04-26 08:56:57.559000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.325 08:56:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:41.325 08:56:58 -- common/autotest_common.sh@850 -- # return 0 00:22:41.325 08:56:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:41.325 08:56:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:41.325 08:56:58 -- common/autotest_common.sh@10 -- # set +x 00:22:41.325 08:56:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.325 08:56:58 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:41.325 08:56:58 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:41.325 08:56:58 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:41.325 08:56:58 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:41.325 08:56:58 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:41.325 08:56:58 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:41.325 08:56:58 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:41.325 08:56:58 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:41.325 [2024-04-26 08:56:58.412518] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.325 [2024-04-26 08:56:58.428519] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:41.325 [2024-04-26 08:56:58.428685] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.325 [2024-04-26 08:56:58.456799] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:41.325 malloc0 00:22:41.325 08:56:58 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.325 08:56:58 -- fips/fips.sh@147 -- # bdevperf_pid=2121256 00:22:41.325 08:56:58 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.325 08:56:58 -- 
fips/fips.sh@148 -- # waitforlisten 2121256 /var/tmp/bdevperf.sock 00:22:41.325 08:56:58 -- common/autotest_common.sh@817 -- # '[' -z 2121256 ']' 00:22:41.325 08:56:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.325 08:56:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:41.325 08:56:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.325 08:56:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:41.325 08:56:58 -- common/autotest_common.sh@10 -- # set +x 00:22:41.325 [2024-04-26 08:56:58.536301] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:22:41.325 [2024-04-26 08:56:58.536354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121256 ] 00:22:41.325 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.583 [2024-04-26 08:56:58.602405] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.583 [2024-04-26 08:56:58.669021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.150 08:56:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:42.150 08:56:59 -- common/autotest_common.sh@850 -- # return 0 00:22:42.150 08:56:59 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:42.409 [2024-04-26 08:56:59.479719] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:42.409 [2024-04-26 08:56:59.479804] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:42.409 TLSTESTn1 00:22:42.409 08:56:59 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:42.668 Running I/O for 10 seconds... 
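The TLS handshake exercised here is driven by a single interchange-format PSK: the key is written with mode 0600, registered on the target via setup_nvmf_tgt_conf, and then handed to the initiator-side attach whose deprecation warnings ("PSK path", "spdk_nvme_ctrlr_opts.psk") appear in the trace. Stitched together from the log, with the long workspace paths shortened (the key value is the test's published sample, not a secret):

    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
    chmod 0600 key.txt
    # initiator side, through the bdevperf RPC socket:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt

On success the controller shows up as bdev TLSTEST, exposed to bdevperf as the TLSTESTn1 namespace that carries the 10-second verify workload below.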
00:22:52.646 00:22:52.646 Latency(us) 00:22:52.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.646 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:52.646 Verification LBA range: start 0x0 length 0x2000 00:22:52.646 TLSTESTn1 : 10.07 1451.55 5.67 0.00 0.00 87931.59 5976.88 135056.59 00:22:52.646 =================================================================================================================== 00:22:52.646 Total : 1451.55 5.67 0.00 0.00 87931.59 5976.88 135056.59 00:22:52.646 0 00:22:52.646 08:57:09 -- fips/fips.sh@1 -- # cleanup 00:22:52.646 08:57:09 -- fips/fips.sh@15 -- # process_shm --id 0 00:22:52.646 08:57:09 -- common/autotest_common.sh@794 -- # type=--id 00:22:52.646 08:57:09 -- common/autotest_common.sh@795 -- # id=0 00:22:52.646 08:57:09 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:22:52.646 08:57:09 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:52.646 08:57:09 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:22:52.646 08:57:09 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:22:52.646 08:57:09 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:22:52.646 08:57:09 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:52.646 nvmf_trace.0 00:22:52.646 08:57:09 -- common/autotest_common.sh@809 -- # return 0 00:22:52.646 08:57:09 -- fips/fips.sh@16 -- # killprocess 2121256 00:22:52.646 08:57:09 -- common/autotest_common.sh@936 -- # '[' -z 2121256 ']' 00:22:52.646 08:57:09 -- common/autotest_common.sh@940 -- # kill -0 2121256 00:22:52.646 08:57:09 -- common/autotest_common.sh@941 -- # uname 00:22:52.646 08:57:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:52.646 08:57:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2121256 00:22:52.905 08:57:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:52.905 08:57:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:52.905 08:57:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2121256' 00:22:52.905 killing process with pid 2121256 00:22:52.905 08:57:09 -- common/autotest_common.sh@955 -- # kill 2121256 00:22:52.905 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.905 00:22:52.905 Latency(us) 00:22:52.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.905 =================================================================================================================== 00:22:52.905 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.905 [2024-04-26 08:57:09.898717] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:52.905 08:57:09 -- common/autotest_common.sh@960 -- # wait 2121256 00:22:52.905 08:57:10 -- fips/fips.sh@17 -- # nvmftestfini 00:22:52.905 08:57:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:52.905 08:57:10 -- nvmf/common.sh@117 -- # sync 00:22:52.905 08:57:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.905 08:57:10 -- nvmf/common.sh@120 -- # set +e 00:22:52.905 08:57:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.905 08:57:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:52.905 rmmod nvme_tcp 00:22:52.905 rmmod nvme_fabrics 00:22:52.905 rmmod nvme_keyring 
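process_shm and killprocess, which run after every test, follow the same shape each time; roughly (the shm id 0 and pid come from this run, and $OUTPUT_DIR is a placeholder for the harness's spdk/../output directory):

    # archive SPDK shared-memory trace files for offline analysis
    for f in $(find /dev/shm -name '*.0' -printf '%f\n'); do
        tar -C /dev/shm/ -cvzf "$OUTPUT_DIR/${f}_shm.tar.gz" "$f"
    done
    # check what the pid currently is before signalling it
    ps --no-headers -o comm= "$pid"    # reactor_2 here, i.e. an SPDK reactor thread
    kill "$pid" && wait "$pid"         # triggers the graceful shutdown seen above

The comm= check mirrors the guard visible in the killprocess trace (it takes a different path when the process turns out to be sudo); the wait only works because the harness started the process from the same shell.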
00:22:53.164 08:57:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:53.164 08:57:10 -- nvmf/common.sh@124 -- # set -e 00:22:53.164 08:57:10 -- nvmf/common.sh@125 -- # return 0 00:22:53.164 08:57:10 -- nvmf/common.sh@478 -- # '[' -n 2121011 ']' 00:22:53.164 08:57:10 -- nvmf/common.sh@479 -- # killprocess 2121011 00:22:53.164 08:57:10 -- common/autotest_common.sh@936 -- # '[' -z 2121011 ']' 00:22:53.164 08:57:10 -- common/autotest_common.sh@940 -- # kill -0 2121011 00:22:53.164 08:57:10 -- common/autotest_common.sh@941 -- # uname 00:22:53.164 08:57:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:53.164 08:57:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2121011 00:22:53.164 08:57:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:53.164 08:57:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:53.164 08:57:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2121011' 00:22:53.164 killing process with pid 2121011 00:22:53.164 08:57:10 -- common/autotest_common.sh@955 -- # kill 2121011 00:22:53.164 [2024-04-26 08:57:10.233014] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:53.164 08:57:10 -- common/autotest_common.sh@960 -- # wait 2121011 00:22:53.424 08:57:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:53.424 08:57:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:53.424 08:57:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:53.424 08:57:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.424 08:57:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:53.424 08:57:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.424 08:57:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.424 08:57:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.329 08:57:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:55.329 08:57:12 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:55.329 00:22:55.329 real 0m22.351s 00:22:55.329 user 0m22.757s 00:22:55.329 sys 0m10.538s 00:22:55.329 08:57:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:55.329 08:57:12 -- common/autotest_common.sh@10 -- # set +x 00:22:55.329 ************************************ 00:22:55.329 END TEST nvmf_fips 00:22:55.329 ************************************ 00:22:55.329 08:57:12 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:22:55.329 08:57:12 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:22:55.329 08:57:12 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:22:55.329 08:57:12 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:22:55.329 08:57:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:55.329 08:57:12 -- common/autotest_common.sh@10 -- # set +x 00:23:01.920 08:57:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:01.920 08:57:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:01.920 08:57:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:01.920 08:57:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:01.920 08:57:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:01.920 08:57:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:01.920 08:57:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:01.920 08:57:18 -- nvmf/common.sh@295 -- # net_devs=() 00:23:01.920 08:57:18 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:23:01.920 08:57:18 -- nvmf/common.sh@296 -- # e810=() 00:23:01.920 08:57:18 -- nvmf/common.sh@296 -- # local -ga e810 00:23:01.920 08:57:18 -- nvmf/common.sh@297 -- # x722=() 00:23:01.920 08:57:18 -- nvmf/common.sh@297 -- # local -ga x722 00:23:01.920 08:57:18 -- nvmf/common.sh@298 -- # mlx=() 00:23:01.920 08:57:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:01.920 08:57:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.920 08:57:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.920 08:57:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.920 08:57:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.920 08:57:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.921 08:57:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.921 08:57:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.921 08:57:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.921 08:57:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.921 08:57:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.921 08:57:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.921 08:57:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:01.921 08:57:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:01.921 08:57:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:01.921 08:57:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.921 08:57:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:01.921 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:01.921 08:57:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.921 08:57:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:01.921 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:01.921 08:57:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:01.921 08:57:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.921 08:57:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.921 08:57:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:01.921 08:57:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.921 08:57:18 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:af:00.0: cvl_0_0' 00:23:01.921 Found net devices under 0000:af:00.0: cvl_0_0 00:23:01.921 08:57:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.921 08:57:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.921 08:57:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.921 08:57:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:01.921 08:57:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.921 08:57:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:01.921 Found net devices under 0000:af:00.1: cvl_0_1 00:23:01.921 08:57:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.921 08:57:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:01.921 08:57:18 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.921 08:57:18 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:23:01.921 08:57:18 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:01.921 08:57:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:01.921 08:57:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:01.921 08:57:18 -- common/autotest_common.sh@10 -- # set +x 00:23:01.921 ************************************ 00:23:01.921 START TEST nvmf_perf_adq 00:23:01.921 ************************************ 00:23:01.921 08:57:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:01.921 * Looking for test storage... 00:23:01.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:01.921 08:57:18 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.921 08:57:18 -- nvmf/common.sh@7 -- # uname -s 00:23:01.921 08:57:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.921 08:57:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.921 08:57:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.921 08:57:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.921 08:57:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.921 08:57:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.921 08:57:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.921 08:57:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.921 08:57:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.921 08:57:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.921 08:57:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:01.921 08:57:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:01.921 08:57:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.921 08:57:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.921 08:57:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.921 08:57:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.921 08:57:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.921 08:57:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.921 08:57:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.921 08:57:18 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.921 08:57:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.921 08:57:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.921 08:57:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.921 08:57:18 -- paths/export.sh@5 -- # export PATH 00:23:01.921 08:57:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.921 08:57:18 -- nvmf/common.sh@47 -- # : 0 00:23:01.921 08:57:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:01.921 08:57:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:01.921 08:57:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.921 08:57:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.921 08:57:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.921 08:57:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:01.921 08:57:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:01.921 08:57:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:01.921 08:57:18 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:01.921 08:57:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:01.921 08:57:18 -- common/autotest_common.sh@10 -- # set +x 00:23:08.490 08:57:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:08.490 08:57:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:08.490 08:57:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:08.490 08:57:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:08.490 
08:57:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:08.490 08:57:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:08.490 08:57:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:08.490 08:57:25 -- nvmf/common.sh@295 -- # net_devs=() 00:23:08.490 08:57:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:08.490 08:57:25 -- nvmf/common.sh@296 -- # e810=() 00:23:08.490 08:57:25 -- nvmf/common.sh@296 -- # local -ga e810 00:23:08.490 08:57:25 -- nvmf/common.sh@297 -- # x722=() 00:23:08.490 08:57:25 -- nvmf/common.sh@297 -- # local -ga x722 00:23:08.490 08:57:25 -- nvmf/common.sh@298 -- # mlx=() 00:23:08.490 08:57:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:08.490 08:57:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.490 08:57:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.490 08:57:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.490 08:57:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.490 08:57:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.490 08:57:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.490 08:57:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.490 08:57:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.490 08:57:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.490 08:57:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.490 08:57:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.490 08:57:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:08.490 08:57:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:08.490 08:57:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:08.490 08:57:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:08.490 08:57:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:08.490 08:57:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:08.490 08:57:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.490 08:57:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:08.490 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:08.490 08:57:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.490 08:57:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.490 08:57:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.490 08:57:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.490 08:57:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.490 08:57:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:08.490 08:57:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:08.490 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:08.490 08:57:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:08.490 08:57:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:08.490 08:57:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.490 08:57:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.490 08:57:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:08.490 08:57:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:08.490 08:57:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:08.490 08:57:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:08.490 08:57:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:23:08.490 08:57:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.490 08:57:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:08.490 08:57:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.490 08:57:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:08.490 Found net devices under 0000:af:00.0: cvl_0_0 00:23:08.490 08:57:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.490 08:57:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:08.490 08:57:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.490 08:57:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:08.490 08:57:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.490 08:57:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:08.490 Found net devices under 0000:af:00.1: cvl_0_1 00:23:08.490 08:57:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.490 08:57:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:08.490 08:57:25 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.490 08:57:25 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:08.490 08:57:25 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:08.490 08:57:25 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:23:08.490 08:57:25 -- target/perf_adq.sh@52 -- # rmmod ice 00:23:09.426 08:57:26 -- target/perf_adq.sh@53 -- # modprobe ice 00:23:11.960 08:57:28 -- target/perf_adq.sh@54 -- # sleep 5 00:23:17.236 08:57:33 -- target/perf_adq.sh@67 -- # nvmftestinit 00:23:17.236 08:57:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:17.236 08:57:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.236 08:57:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:17.236 08:57:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:17.236 08:57:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:17.236 08:57:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.236 08:57:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.236 08:57:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.236 08:57:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:17.236 08:57:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:17.236 08:57:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:17.236 08:57:33 -- common/autotest_common.sh@10 -- # set +x 00:23:17.236 08:57:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:17.236 08:57:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:17.236 08:57:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.236 08:57:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.236 08:57:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.236 08:57:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.236 08:57:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.236 08:57:33 -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.236 08:57:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.236 08:57:33 -- nvmf/common.sh@296 -- # e810=() 00:23:17.236 08:57:33 -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.236 08:57:33 -- nvmf/common.sh@297 -- # x722=() 00:23:17.236 08:57:33 -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.236 08:57:33 -- nvmf/common.sh@298 -- # mlx=() 00:23:17.236 08:57:33 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:23:17.236 08:57:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.236 08:57:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.236 08:57:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.236 08:57:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.236 08:57:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.237 08:57:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.237 08:57:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.237 08:57:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.237 08:57:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.237 08:57:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.237 08:57:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.237 08:57:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.237 08:57:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.237 08:57:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.237 08:57:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.237 08:57:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:17.237 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:17.237 08:57:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.237 08:57:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:17.237 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:17.237 08:57:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.237 08:57:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.237 08:57:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.237 08:57:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:17.237 08:57:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.237 08:57:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:17.237 Found net devices under 0000:af:00.0: cvl_0_0 00:23:17.237 08:57:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.237 08:57:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.237 08:57:33 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.237 08:57:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:17.237 08:57:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.237 08:57:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:17.237 Found net devices under 0000:af:00.1: cvl_0_1 00:23:17.237 08:57:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.237 08:57:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:17.237 08:57:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:17.237 08:57:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:17.237 08:57:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:17.237 08:57:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.237 08:57:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.237 08:57:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.237 08:57:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.237 08:57:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.237 08:57:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.237 08:57:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.237 08:57:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.237 08:57:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.237 08:57:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.237 08:57:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:17.237 08:57:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.237 08:57:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.237 08:57:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.237 08:57:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.237 08:57:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.237 08:57:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.237 08:57:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.237 08:57:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.237 08:57:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:17.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:23:17.237 00:23:17.237 --- 10.0.0.2 ping statistics --- 00:23:17.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.237 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:23:17.237 08:57:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:17.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:23:17.237 00:23:17.237 --- 10.0.0.1 ping statistics --- 00:23:17.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.237 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:23:17.237 08:57:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.237 08:57:34 -- nvmf/common.sh@411 -- # return 0 00:23:17.237 08:57:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:17.237 08:57:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.237 08:57:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:17.237 08:57:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:17.237 08:57:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.237 08:57:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:17.237 08:57:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:17.237 08:57:34 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:17.237 08:57:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:17.237 08:57:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:17.237 08:57:34 -- common/autotest_common.sh@10 -- # set +x 00:23:17.237 08:57:34 -- nvmf/common.sh@470 -- # nvmfpid=2132302 00:23:17.237 08:57:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:17.237 08:57:34 -- nvmf/common.sh@471 -- # waitforlisten 2132302 00:23:17.237 08:57:34 -- common/autotest_common.sh@817 -- # '[' -z 2132302 ']' 00:23:17.237 08:57:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.237 08:57:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:17.237 08:57:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.237 08:57:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:17.237 08:57:34 -- common/autotest_common.sh@10 -- # set +x 00:23:17.237 [2024-04-26 08:57:34.192437] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:23:17.237 [2024-04-26 08:57:34.192489] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.237 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.237 [2024-04-26 08:57:34.267485] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.237 [2024-04-26 08:57:34.340636] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.237 [2024-04-26 08:57:34.340672] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.237 [2024-04-26 08:57:34.340682] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.237 [2024-04-26 08:57:34.340691] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.237 [2024-04-26 08:57:34.340698] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
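[The nvmf_tcp_init sequence traced above builds a two-endpoint topology on a single host: the first E810 port is moved into a private network namespace to play the target, its sibling stays in the root namespace as the initiator, and the two ping checks prove the path before anything else runs. The commands, as executed in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP toward the initiator port

NVMF_APP is then prefixed with the namespace wrapper, which is why nvmf_tgt appears in the traces as 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt'.]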
00:23:17.237 [2024-04-26 08:57:34.340787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.237 [2024-04-26 08:57:34.340878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.237 [2024-04-26 08:57:34.340961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:17.237 [2024-04-26 08:57:34.340963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.806 08:57:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:17.806 08:57:34 -- common/autotest_common.sh@850 -- # return 0 00:23:17.806 08:57:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:17.806 08:57:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:17.806 08:57:34 -- common/autotest_common.sh@10 -- # set +x 00:23:17.806 08:57:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.806 08:57:35 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:23:17.806 08:57:35 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:17.806 08:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.806 08:57:35 -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 08:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.066 08:57:35 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:23:18.066 08:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.066 08:57:35 -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 08:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.066 08:57:35 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:18.066 08:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.066 08:57:35 -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 [2024-04-26 08:57:35.158221] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.066 08:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.066 08:57:35 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:18.066 08:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.066 08:57:35 -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 Malloc1 00:23:18.066 08:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.066 08:57:35 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:18.066 08:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.066 08:57:35 -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 08:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.066 08:57:35 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:18.066 08:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.066 08:57:35 -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 08:57:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.066 08:57:35 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:18.066 08:57:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:18.066 08:57:35 -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 [2024-04-26 08:57:35.204690] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.066 08:57:35 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:18.066 08:57:35 -- target/perf_adq.sh@73 -- # perfpid=2132498 00:23:18.066 08:57:35 -- target/perf_adq.sh@74 -- # sleep 2 00:23:18.066 08:57:35 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:18.066 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.988 08:57:37 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:23:19.988 08:57:37 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:19.988 08:57:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.988 08:57:37 -- target/perf_adq.sh@76 -- # wc -l 00:23:19.988 08:57:37 -- common/autotest_common.sh@10 -- # set +x 00:23:20.247 08:57:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:20.247 08:57:37 -- target/perf_adq.sh@76 -- # count=4 00:23:20.247 08:57:37 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:23:20.247 08:57:37 -- target/perf_adq.sh@81 -- # wait 2132498 00:23:28.374 Initializing NVMe Controllers 00:23:28.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:28.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:28.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:28.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:28.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:28.374 Initialization complete. Launching workers. 00:23:28.374 ======================================================== 00:23:28.374 Latency(us) 00:23:28.374 Device Information : IOPS MiB/s Average min max 00:23:28.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9444.50 36.89 6776.74 1791.80 11169.92 00:23:28.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9207.10 35.97 6950.96 1710.60 13010.88 00:23:28.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9611.10 37.54 6659.93 1775.49 12889.81 00:23:28.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9482.50 37.04 6748.94 1673.94 12754.60 00:23:28.374 ======================================================== 00:23:28.374 Total : 37745.19 147.44 6782.51 1673.94 13010.88 00:23:28.374 00:23:28.374 08:57:45 -- target/perf_adq.sh@82 -- # nvmftestfini 00:23:28.374 08:57:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:28.374 08:57:45 -- nvmf/common.sh@117 -- # sync 00:23:28.374 08:57:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:28.374 08:57:45 -- nvmf/common.sh@120 -- # set +e 00:23:28.374 08:57:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:28.374 08:57:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:28.374 rmmod nvme_tcp 00:23:28.374 rmmod nvme_fabrics 00:23:28.374 rmmod nvme_keyring 00:23:28.374 08:57:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:28.374 08:57:45 -- nvmf/common.sh@124 -- # set -e 00:23:28.374 08:57:45 -- nvmf/common.sh@125 -- # return 0 00:23:28.374 08:57:45 -- nvmf/common.sh@478 -- # '[' -n 2132302 ']' 00:23:28.374 08:57:45 -- nvmf/common.sh@479 -- # killprocess 2132302 00:23:28.374 08:57:45 -- common/autotest_common.sh@936 -- # '[' -z 2132302 ']' 00:23:28.374 08:57:45 -- common/autotest_common.sh@940 -- # 
kill -0 2132302 00:23:28.374 08:57:45 -- common/autotest_common.sh@941 -- # uname 00:23:28.374 08:57:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:28.374 08:57:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2132302 00:23:28.374 08:57:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:28.374 08:57:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:28.374 08:57:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2132302' 00:23:28.374 killing process with pid 2132302 00:23:28.374 08:57:45 -- common/autotest_common.sh@955 -- # kill 2132302 00:23:28.374 08:57:45 -- common/autotest_common.sh@960 -- # wait 2132302 00:23:28.633 08:57:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:28.633 08:57:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:28.633 08:57:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:28.633 08:57:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:28.633 08:57:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:28.633 08:57:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.633 08:57:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:28.633 08:57:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.173 08:57:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:31.173 08:57:47 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:23:31.173 08:57:47 -- target/perf_adq.sh@52 -- # rmmod ice 00:23:32.112 08:57:49 -- target/perf_adq.sh@53 -- # modprobe ice 00:23:34.648 08:57:51 -- target/perf_adq.sh@54 -- # sleep 5 00:23:39.936 08:57:56 -- target/perf_adq.sh@87 -- # nvmftestinit 00:23:39.936 08:57:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:39.936 08:57:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.936 08:57:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:39.936 08:57:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:39.936 08:57:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:39.936 08:57:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.936 08:57:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.936 08:57:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.936 08:57:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:39.936 08:57:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.936 08:57:56 -- common/autotest_common.sh@10 -- # set +x 00:23:39.936 08:57:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:39.936 08:57:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:39.936 08:57:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:39.936 08:57:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:39.936 08:57:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:39.936 08:57:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:39.936 08:57:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:39.936 08:57:56 -- nvmf/common.sh@295 -- # net_devs=() 00:23:39.936 08:57:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:39.936 08:57:56 -- nvmf/common.sh@296 -- # e810=() 00:23:39.936 08:57:56 -- nvmf/common.sh@296 -- # local -ga e810 00:23:39.936 08:57:56 -- nvmf/common.sh@297 -- # x722=() 00:23:39.936 08:57:56 -- nvmf/common.sh@297 -- # local -ga x722 00:23:39.936 08:57:56 -- nvmf/common.sh@298 -- # mlx=() 00:23:39.936 08:57:56 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:23:39.936 08:57:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.936 08:57:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.936 08:57:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.936 08:57:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.936 08:57:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.936 08:57:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.936 08:57:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.936 08:57:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.936 08:57:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.936 08:57:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.936 08:57:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.936 08:57:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:39.936 08:57:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:39.936 08:57:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:39.936 08:57:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.936 08:57:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:39.936 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:39.936 08:57:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.936 08:57:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:39.936 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:39.936 08:57:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:39.936 08:57:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:39.936 08:57:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.937 08:57:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.937 08:57:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:39.937 08:57:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.937 08:57:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:39.937 Found net devices under 0000:af:00.0: cvl_0_0 00:23:39.937 08:57:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.937 08:57:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.937 08:57:56 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.937 08:57:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:39.937 08:57:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.937 08:57:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:39.937 Found net devices under 0000:af:00.1: cvl_0_1 00:23:39.937 08:57:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.937 08:57:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:39.937 08:57:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:39.937 08:57:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:39.937 08:57:56 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:39.937 08:57:56 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:39.937 08:57:56 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.937 08:57:56 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.937 08:57:56 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.937 08:57:56 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:39.937 08:57:56 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.937 08:57:56 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.937 08:57:56 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:39.937 08:57:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.937 08:57:56 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.937 08:57:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:39.937 08:57:56 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:39.937 08:57:56 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.937 08:57:56 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.937 08:57:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.937 08:57:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.937 08:57:56 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:39.937 08:57:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.937 08:57:56 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.937 08:57:56 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.937 08:57:56 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:39.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:23:39.937 00:23:39.937 --- 10.0.0.2 ping statistics --- 00:23:39.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.937 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:23:39.937 08:57:56 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:39.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:23:39.937 00:23:39.937 --- 10.0.0.1 ping statistics --- 00:23:39.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.937 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:23:39.937 08:57:56 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.937 08:57:56 -- nvmf/common.sh@411 -- # return 0 00:23:39.937 08:57:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:39.937 08:57:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.937 08:57:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:39.937 08:57:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:39.937 08:57:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.937 08:57:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:39.937 08:57:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:39.937 08:57:56 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:23:39.937 08:57:56 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:39.937 08:57:56 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:39.937 08:57:56 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:39.937 net.core.busy_poll = 1 00:23:39.937 08:57:56 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:39.937 net.core.busy_read = 1 00:23:39.937 08:57:56 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:39.937 08:57:56 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:39.937 08:57:56 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:39.937 08:57:56 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:39.937 08:57:56 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:39.937 08:57:56 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:39.937 08:57:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:39.937 08:57:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:39.937 08:57:56 -- common/autotest_common.sh@10 -- # set +x 00:23:39.937 08:57:56 -- nvmf/common.sh@470 -- # nvmfpid=2136530 00:23:39.937 08:57:56 -- nvmf/common.sh@471 -- # waitforlisten 2136530 00:23:39.937 08:57:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:39.937 08:57:56 -- common/autotest_common.sh@817 -- # '[' -z 2136530 ']' 00:23:39.937 08:57:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.937 08:57:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:39.937 08:57:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
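[adq_configure_driver, traced above, is what distinguishes this second perf pass from the first: hardware TC offload and busy polling are switched on, and a tc mqprio/flower pair steers NVMe/TCP traffic (dst_port 4420) into its own hardware traffic class. As executed in the trace:

  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
      num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel   # TC1 = queues 2-3
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 \
      flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The matching software half is the sock_impl_set_options call just below, which now passes --enable-placement-id 1 (it was 0 in the first pass) so SPDK's posix sock layer can make use of the per-queue placement hints.]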
00:23:39.937 08:57:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:39.937 08:57:56 -- common/autotest_common.sh@10 -- # set +x 00:23:39.937 [2024-04-26 08:57:57.003737] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:23:39.937 [2024-04-26 08:57:57.003788] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.937 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.937 [2024-04-26 08:57:57.078509] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.937 [2024-04-26 08:57:57.147023] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.937 [2024-04-26 08:57:57.147069] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.937 [2024-04-26 08:57:57.147081] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.937 [2024-04-26 08:57:57.147090] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.937 [2024-04-26 08:57:57.147096] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.937 [2024-04-26 08:57:57.147152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.937 [2024-04-26 08:57:57.147219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.937 [2024-04-26 08:57:57.147302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.937 [2024-04-26 08:57:57.147303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.875 08:57:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:40.875 08:57:57 -- common/autotest_common.sh@850 -- # return 0 00:23:40.875 08:57:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:40.875 08:57:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:40.875 08:57:57 -- common/autotest_common.sh@10 -- # set +x 00:23:40.875 08:57:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.875 08:57:57 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:23:40.875 08:57:57 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:40.875 08:57:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.875 08:57:57 -- common/autotest_common.sh@10 -- # set +x 00:23:40.875 08:57:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.875 08:57:57 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:23:40.875 08:57:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.875 08:57:57 -- common/autotest_common.sh@10 -- # set +x 00:23:40.875 08:57:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.875 08:57:57 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:40.875 08:57:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.875 08:57:57 -- common/autotest_common.sh@10 -- # set +x 00:23:40.875 [2024-04-26 08:57:57.954229] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.875 08:57:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.875 08:57:57 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
00:23:40.875 08:57:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.875 08:57:57 -- common/autotest_common.sh@10 -- # set +x 00:23:40.875 Malloc1 00:23:40.875 08:57:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.875 08:57:57 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:40.875 08:57:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.875 08:57:57 -- common/autotest_common.sh@10 -- # set +x 00:23:40.875 08:57:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.875 08:57:57 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:40.875 08:57:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.875 08:57:57 -- common/autotest_common.sh@10 -- # set +x 00:23:40.875 08:57:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.875 08:57:58 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.875 08:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.875 08:57:58 -- common/autotest_common.sh@10 -- # set +x 00:23:40.875 [2024-04-26 08:57:58.004998] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.875 08:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.875 08:57:58 -- target/perf_adq.sh@94 -- # perfpid=2136710 00:23:40.875 08:57:58 -- target/perf_adq.sh@95 -- # sleep 2 00:23:40.875 08:57:58 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:40.875 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.782 08:58:00 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:23:42.782 08:58:00 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:42.782 08:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.782 08:58:00 -- target/perf_adq.sh@97 -- # wc -l 00:23:42.782 08:58:00 -- common/autotest_common.sh@10 -- # set +x 00:23:43.039 08:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.039 08:58:00 -- target/perf_adq.sh@97 -- # count=3 00:23:43.039 08:58:00 -- target/perf_adq.sh@98 -- # [[ 3 -lt 2 ]] 00:23:43.039 08:58:00 -- target/perf_adq.sh@103 -- # wait 2136710 00:23:51.194 Initializing NVMe Controllers 00:23:51.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:51.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:51.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:51.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:51.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:51.194 Initialization complete. Launching workers. 
00:23:51.194 ======================================================== 00:23:51.194 Latency(us) 00:23:51.195 Device Information : IOPS MiB/s Average min max 00:23:51.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5544.20 21.66 11548.86 2292.11 56472.64 00:23:51.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5714.60 22.32 11198.33 1771.22 56188.84 00:23:51.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6062.30 23.68 10557.79 1940.20 55600.86 00:23:51.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5846.70 22.84 10946.61 1770.05 55769.10 00:23:51.195 ======================================================== 00:23:51.195 Total : 23167.79 90.50 11051.08 1770.05 56472.64 00:23:51.195 00:23:51.195 08:58:08 -- target/perf_adq.sh@104 -- # nvmftestfini 00:23:51.195 08:58:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:51.195 08:58:08 -- nvmf/common.sh@117 -- # sync 00:23:51.195 08:58:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:51.195 08:58:08 -- nvmf/common.sh@120 -- # set +e 00:23:51.195 08:58:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:51.195 08:58:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:51.195 rmmod nvme_tcp 00:23:51.195 rmmod nvme_fabrics 00:23:51.195 rmmod nvme_keyring 00:23:51.195 08:58:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:51.195 08:58:08 -- nvmf/common.sh@124 -- # set -e 00:23:51.195 08:58:08 -- nvmf/common.sh@125 -- # return 0 00:23:51.195 08:58:08 -- nvmf/common.sh@478 -- # '[' -n 2136530 ']' 00:23:51.195 08:58:08 -- nvmf/common.sh@479 -- # killprocess 2136530 00:23:51.195 08:58:08 -- common/autotest_common.sh@936 -- # '[' -z 2136530 ']' 00:23:51.195 08:58:08 -- common/autotest_common.sh@940 -- # kill -0 2136530 00:23:51.195 08:58:08 -- common/autotest_common.sh@941 -- # uname 00:23:51.195 08:58:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:51.195 08:58:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2136530 00:23:51.195 08:58:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:51.195 08:58:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:51.195 08:58:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2136530' 00:23:51.195 killing process with pid 2136530 00:23:51.195 08:58:08 -- common/autotest_common.sh@955 -- # kill 2136530 00:23:51.195 08:58:08 -- common/autotest_common.sh@960 -- # wait 2136530 00:23:51.455 08:58:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:51.455 08:58:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:51.455 08:58:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:51.455 08:58:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:51.455 08:58:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:51.455 08:58:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.455 08:58:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.455 08:58:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.363 08:58:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:53.363 08:58:10 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:23:53.363 00:23:53.363 real 0m51.767s 00:23:53.363 user 2m45.110s 00:23:53.363 sys 0m14.747s 00:23:53.363 08:58:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:53.363 08:58:10 -- common/autotest_common.sh@10 -- # set +x 00:23:53.363 
************************************ 00:23:53.363 END TEST nvmf_perf_adq 00:23:53.363 ************************************ 00:23:53.623 08:58:10 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:53.623 08:58:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:53.623 08:58:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:53.623 08:58:10 -- common/autotest_common.sh@10 -- # set +x 00:23:53.623 ************************************ 00:23:53.623 START TEST nvmf_shutdown 00:23:53.623 ************************************ 00:23:53.623 08:58:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:53.883 * Looking for test storage... 00:23:53.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:53.883 08:58:10 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.883 08:58:10 -- nvmf/common.sh@7 -- # uname -s 00:23:53.883 08:58:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.883 08:58:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.883 08:58:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.883 08:58:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.883 08:58:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.883 08:58:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.883 08:58:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.883 08:58:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.883 08:58:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.883 08:58:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.883 08:58:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:53.883 08:58:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:53.883 08:58:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.883 08:58:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.883 08:58:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.883 08:58:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.883 08:58:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.883 08:58:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.883 08:58:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.883 08:58:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.883 08:58:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.883 08:58:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.883 08:58:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.883 08:58:10 -- paths/export.sh@5 -- # export PATH 00:23:53.883 08:58:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.883 08:58:10 -- nvmf/common.sh@47 -- # : 0 00:23:53.883 08:58:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:53.883 08:58:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:53.883 08:58:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.883 08:58:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.883 08:58:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.883 08:58:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:53.883 08:58:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:53.883 08:58:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:53.883 08:58:10 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:53.883 08:58:10 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:53.883 08:58:10 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:53.883 08:58:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:53.883 08:58:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:53.883 08:58:10 -- common/autotest_common.sh@10 -- # set +x 00:23:53.883 ************************************ 00:23:53.883 START TEST nvmf_shutdown_tc1 00:23:53.883 ************************************ 00:23:53.883 08:58:11 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:23:53.883 08:58:11 -- target/shutdown.sh@74 -- # starttarget 00:23:53.883 08:58:11 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:53.883 08:58:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:53.883 08:58:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.883 08:58:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:53.883 08:58:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:53.883 08:58:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:53.883 
08:58:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.883 08:58:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.883 08:58:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.883 08:58:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:53.883 08:58:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:53.883 08:58:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:53.883 08:58:11 -- common/autotest_common.sh@10 -- # set +x 00:24:00.491 08:58:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:00.491 08:58:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.491 08:58:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.491 08:58:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.491 08:58:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.491 08:58:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.491 08:58:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.491 08:58:17 -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.491 08:58:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.491 08:58:17 -- nvmf/common.sh@296 -- # e810=() 00:24:00.491 08:58:17 -- nvmf/common.sh@296 -- # local -ga e810 00:24:00.491 08:58:17 -- nvmf/common.sh@297 -- # x722=() 00:24:00.491 08:58:17 -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.491 08:58:17 -- nvmf/common.sh@298 -- # mlx=() 00:24:00.491 08:58:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.491 08:58:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.491 08:58:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.491 08:58:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.491 08:58:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.491 08:58:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.491 08:58:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.491 08:58:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.491 08:58:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.491 08:58:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.491 08:58:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.491 08:58:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.491 08:58:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.491 08:58:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.491 08:58:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:00.491 08:58:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:00.492 08:58:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:00.492 08:58:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.492 08:58:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.492 08:58:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:00.492 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:00.492 08:58:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.492 08:58:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.492 08:58:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.492 08:58:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.492 08:58:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.492 08:58:17 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:24:00.492 08:58:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:00.492 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:00.492 08:58:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.492 08:58:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.492 08:58:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.492 08:58:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.492 08:58:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.492 08:58:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.492 08:58:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:00.492 08:58:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:00.492 08:58:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.492 08:58:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.492 08:58:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:00.492 08:58:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.492 08:58:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:00.492 Found net devices under 0000:af:00.0: cvl_0_0 00:24:00.492 08:58:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.492 08:58:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.492 08:58:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.492 08:58:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:00.492 08:58:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.492 08:58:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:00.492 Found net devices under 0000:af:00.1: cvl_0_1 00:24:00.492 08:58:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.492 08:58:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:00.492 08:58:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:00.492 08:58:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:00.492 08:58:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:00.492 08:58:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:00.492 08:58:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.492 08:58:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.492 08:58:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.492 08:58:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.492 08:58:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.492 08:58:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.492 08:58:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.492 08:58:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.492 08:58:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.492 08:58:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.492 08:58:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.492 08:58:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.492 08:58:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.492 08:58:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.492 08:58:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.492 08:58:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.492 08:58:17 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.752 08:58:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.752 08:58:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.752 08:58:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:24:00.752 00:24:00.752 --- 10.0.0.2 ping statistics --- 00:24:00.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.752 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:24:00.752 08:58:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:24:00.752 00:24:00.752 --- 10.0.0.1 ping statistics --- 00:24:00.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.752 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:24:00.752 08:58:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.752 08:58:17 -- nvmf/common.sh@411 -- # return 0 00:24:00.752 08:58:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:00.752 08:58:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.752 08:58:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:00.752 08:58:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:00.752 08:58:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.752 08:58:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:00.752 08:58:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:00.752 08:58:17 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:00.752 08:58:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:00.752 08:58:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:00.752 08:58:17 -- common/autotest_common.sh@10 -- # set +x 00:24:00.752 08:58:17 -- nvmf/common.sh@470 -- # nvmfpid=2142373 00:24:00.752 08:58:17 -- nvmf/common.sh@471 -- # waitforlisten 2142373 00:24:00.752 08:58:17 -- common/autotest_common.sh@817 -- # '[' -z 2142373 ']' 00:24:00.752 08:58:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.752 08:58:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:00.752 08:58:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.752 08:58:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:00.752 08:58:17 -- common/autotest_common.sh@10 -- # set +x 00:24:00.752 08:58:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:00.752 [2024-04-26 08:58:17.895923] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
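The nvmf_tcp_init sequence traced above splits the dual-port E810 NIC: port cvl_0_0 is moved into a private network namespace and hosts the target at 10.0.0.2, while its peer port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, so target and initiator traffic crosses the NIC rather than kernel loopback. A condensed, standalone sketch of the same steps (interface names, addresses, and port number taken from this run; this is not the verbatim nvmf/common.sh source):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                                                 # root ns reaches the target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns reaches the initiator

The two successful pings traced above are what let the init path return 0 and proceed to modprobe nvme-tcp.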
00:24:00.752 [2024-04-26 08:58:17.895970] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.752 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.752 [2024-04-26 08:58:17.970757] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:01.012 [2024-04-26 08:58:18.044555] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.012 [2024-04-26 08:58:18.044601] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.012 [2024-04-26 08:58:18.044610] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.012 [2024-04-26 08:58:18.044618] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.012 [2024-04-26 08:58:18.044624] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.012 [2024-04-26 08:58:18.044723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.012 [2024-04-26 08:58:18.044806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.012 [2024-04-26 08:58:18.044916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.012 [2024-04-26 08:58:18.044918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:01.580 08:58:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:01.580 08:58:18 -- common/autotest_common.sh@850 -- # return 0 00:24:01.580 08:58:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:01.580 08:58:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:01.580 08:58:18 -- common/autotest_common.sh@10 -- # set +x 00:24:01.580 08:58:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.580 08:58:18 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.580 08:58:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.580 08:58:18 -- common/autotest_common.sh@10 -- # set +x 00:24:01.580 [2024-04-26 08:58:18.746196] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.580 08:58:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.580 08:58:18 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:01.580 08:58:18 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:01.580 08:58:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:01.580 08:58:18 -- common/autotest_common.sh@10 -- # set +x 00:24:01.580 08:58:18 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:01.580 08:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.580 08:58:18 -- target/shutdown.sh@28 -- # cat 00:24:01.580 08:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.580 08:58:18 -- target/shutdown.sh@28 -- # cat 00:24:01.580 08:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.580 08:58:18 -- target/shutdown.sh@28 -- # cat 00:24:01.580 08:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.580 08:58:18 -- target/shutdown.sh@28 -- # cat 00:24:01.580 08:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.580 08:58:18 -- target/shutdown.sh@28 
-- # cat 00:24:01.580 08:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.580 08:58:18 -- target/shutdown.sh@28 -- # cat 00:24:01.580 08:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.580 08:58:18 -- target/shutdown.sh@28 -- # cat 00:24:01.580 08:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.580 08:58:18 -- target/shutdown.sh@28 -- # cat 00:24:01.580 08:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.580 08:58:18 -- target/shutdown.sh@28 -- # cat 00:24:01.580 08:58:18 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:01.580 08:58:18 -- target/shutdown.sh@28 -- # cat 00:24:01.580 08:58:18 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:01.580 08:58:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.580 08:58:18 -- common/autotest_common.sh@10 -- # set +x 00:24:01.839 Malloc1 00:24:01.839 [2024-04-26 08:58:18.856889] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.839 Malloc2 00:24:01.839 Malloc3 00:24:01.839 Malloc4 00:24:01.839 Malloc5 00:24:01.839 Malloc6 00:24:01.839 Malloc7 00:24:02.098 Malloc8 00:24:02.098 Malloc9 00:24:02.098 Malloc10 00:24:02.098 08:58:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.098 08:58:19 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:02.098 08:58:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:02.098 08:58:19 -- common/autotest_common.sh@10 -- # set +x 00:24:02.098 08:58:19 -- target/shutdown.sh@78 -- # perfpid=2142657 00:24:02.098 08:58:19 -- target/shutdown.sh@79 -- # waitforlisten 2142657 /var/tmp/bdevperf.sock 00:24:02.098 08:58:19 -- common/autotest_common.sh@817 -- # '[' -z 2142657 ']' 00:24:02.098 08:58:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:02.098 08:58:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:02.098 08:58:19 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:02.098 08:58:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
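Each pass of the `for i in "${num_subsystems[@]}" ... cat` loop above appends one RPC batch per subsystem (1 through 10) to rpcs.txt; the single rpc_cmd that follows replays the whole file against the target, which is what produces the Malloc1 through Malloc10 bdevs and the "Listening on 10.0.0.2 port 4420" notice seen above. The heredoc body itself is not echoed into this trace; a plausible reconstruction of one iteration, using standard SPDK RPC names (the Malloc size and block size here are illustrative assumptions):

cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF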
00:24:02.098 08:58:19 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:02.098 08:58:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:02.098 08:58:19 -- nvmf/common.sh@521 -- # config=() 00:24:02.098 08:58:19 -- common/autotest_common.sh@10 -- # set +x 00:24:02.098 08:58:19 -- nvmf/common.sh@521 -- # local subsystem config 00:24:02.098 08:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.098 08:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.098 { 00:24:02.098 "params": { 00:24:02.098 "name": "Nvme$subsystem", 00:24:02.098 "trtype": "$TEST_TRANSPORT", 00:24:02.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.098 "adrfam": "ipv4", 00:24:02.098 "trsvcid": "$NVMF_PORT", 00:24:02.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.098 "hdgst": ${hdgst:-false}, 00:24:02.098 "ddgst": ${ddgst:-false} 00:24:02.098 }, 00:24:02.098 "method": "bdev_nvme_attach_controller" 00:24:02.098 } 00:24:02.098 EOF 00:24:02.098 )") 00:24:02.098 08:58:19 -- nvmf/common.sh@543 -- # cat 00:24:02.098 08:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.098 08:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.098 { 00:24:02.098 "params": { 00:24:02.098 "name": "Nvme$subsystem", 00:24:02.098 "trtype": "$TEST_TRANSPORT", 00:24:02.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.098 "adrfam": "ipv4", 00:24:02.098 "trsvcid": "$NVMF_PORT", 00:24:02.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.098 "hdgst": ${hdgst:-false}, 00:24:02.098 "ddgst": ${ddgst:-false} 00:24:02.098 }, 00:24:02.099 "method": "bdev_nvme_attach_controller" 00:24:02.099 } 00:24:02.099 EOF 00:24:02.099 )") 00:24:02.099 08:58:19 -- nvmf/common.sh@543 -- # cat 00:24:02.099 08:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.099 08:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.099 { 00:24:02.099 "params": { 00:24:02.099 "name": "Nvme$subsystem", 00:24:02.099 "trtype": "$TEST_TRANSPORT", 00:24:02.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.099 "adrfam": "ipv4", 00:24:02.099 "trsvcid": "$NVMF_PORT", 00:24:02.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.099 "hdgst": ${hdgst:-false}, 00:24:02.099 "ddgst": ${ddgst:-false} 00:24:02.099 }, 00:24:02.099 "method": "bdev_nvme_attach_controller" 00:24:02.099 } 00:24:02.099 EOF 00:24:02.099 )") 00:24:02.099 08:58:19 -- nvmf/common.sh@543 -- # cat 00:24:02.099 08:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.099 08:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.099 { 00:24:02.099 "params": { 00:24:02.099 "name": "Nvme$subsystem", 00:24:02.099 "trtype": "$TEST_TRANSPORT", 00:24:02.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.099 "adrfam": "ipv4", 00:24:02.099 "trsvcid": "$NVMF_PORT", 00:24:02.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.099 "hdgst": ${hdgst:-false}, 00:24:02.099 "ddgst": ${ddgst:-false} 00:24:02.099 }, 00:24:02.099 "method": "bdev_nvme_attach_controller" 00:24:02.099 } 00:24:02.099 EOF 00:24:02.099 )") 00:24:02.099 08:58:19 -- nvmf/common.sh@543 -- # cat 00:24:02.099 08:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.099 08:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:24:02.099 { 00:24:02.099 "params": { 00:24:02.099 "name": "Nvme$subsystem", 00:24:02.099 "trtype": "$TEST_TRANSPORT", 00:24:02.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.099 "adrfam": "ipv4", 00:24:02.099 "trsvcid": "$NVMF_PORT", 00:24:02.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.099 "hdgst": ${hdgst:-false}, 00:24:02.099 "ddgst": ${ddgst:-false} 00:24:02.099 }, 00:24:02.099 "method": "bdev_nvme_attach_controller" 00:24:02.099 } 00:24:02.099 EOF 00:24:02.099 )") 00:24:02.099 08:58:19 -- nvmf/common.sh@543 -- # cat 00:24:02.099 [2024-04-26 08:58:19.334439] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:24:02.099 [2024-04-26 08:58:19.334500] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:02.099 08:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.099 08:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.099 { 00:24:02.099 "params": { 00:24:02.099 "name": "Nvme$subsystem", 00:24:02.099 "trtype": "$TEST_TRANSPORT", 00:24:02.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.099 "adrfam": "ipv4", 00:24:02.099 "trsvcid": "$NVMF_PORT", 00:24:02.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.099 "hdgst": ${hdgst:-false}, 00:24:02.099 "ddgst": ${ddgst:-false} 00:24:02.099 }, 00:24:02.099 "method": "bdev_nvme_attach_controller" 00:24:02.099 } 00:24:02.099 EOF 00:24:02.099 )") 00:24:02.099 08:58:19 -- nvmf/common.sh@543 -- # cat 00:24:02.099 08:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.099 08:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.099 { 00:24:02.099 "params": { 00:24:02.099 "name": "Nvme$subsystem", 00:24:02.099 "trtype": "$TEST_TRANSPORT", 00:24:02.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.099 "adrfam": "ipv4", 00:24:02.099 "trsvcid": "$NVMF_PORT", 00:24:02.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.099 "hdgst": ${hdgst:-false}, 00:24:02.099 "ddgst": ${ddgst:-false} 00:24:02.099 }, 00:24:02.099 "method": "bdev_nvme_attach_controller" 00:24:02.099 } 00:24:02.099 EOF 00:24:02.099 )") 00:24:02.358 08:58:19 -- nvmf/common.sh@543 -- # cat 00:24:02.358 08:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.358 08:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.358 { 00:24:02.358 "params": { 00:24:02.358 "name": "Nvme$subsystem", 00:24:02.358 "trtype": "$TEST_TRANSPORT", 00:24:02.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.358 "adrfam": "ipv4", 00:24:02.358 "trsvcid": "$NVMF_PORT", 00:24:02.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.358 "hdgst": ${hdgst:-false}, 00:24:02.358 "ddgst": ${ddgst:-false} 00:24:02.358 }, 00:24:02.358 "method": "bdev_nvme_attach_controller" 00:24:02.358 } 00:24:02.358 EOF 00:24:02.358 )") 00:24:02.358 08:58:19 -- nvmf/common.sh@543 -- # cat 00:24:02.358 08:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.358 08:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.358 { 00:24:02.358 "params": { 00:24:02.358 "name": "Nvme$subsystem", 00:24:02.358 "trtype": "$TEST_TRANSPORT", 00:24:02.358 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:24:02.358 "adrfam": "ipv4", 00:24:02.358 "trsvcid": "$NVMF_PORT", 00:24:02.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.358 "hdgst": ${hdgst:-false}, 00:24:02.358 "ddgst": ${ddgst:-false} 00:24:02.358 }, 00:24:02.358 "method": "bdev_nvme_attach_controller" 00:24:02.358 } 00:24:02.358 EOF 00:24:02.358 )") 00:24:02.358 08:58:19 -- nvmf/common.sh@543 -- # cat 00:24:02.358 08:58:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:02.358 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.358 08:58:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:02.358 { 00:24:02.358 "params": { 00:24:02.358 "name": "Nvme$subsystem", 00:24:02.358 "trtype": "$TEST_TRANSPORT", 00:24:02.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.358 "adrfam": "ipv4", 00:24:02.358 "trsvcid": "$NVMF_PORT", 00:24:02.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.358 "hdgst": ${hdgst:-false}, 00:24:02.358 "ddgst": ${ddgst:-false} 00:24:02.358 }, 00:24:02.358 "method": "bdev_nvme_attach_controller" 00:24:02.358 } 00:24:02.358 EOF 00:24:02.358 )") 00:24:02.358 08:58:19 -- nvmf/common.sh@543 -- # cat 00:24:02.358 08:58:19 -- nvmf/common.sh@545 -- # jq . 00:24:02.358 08:58:19 -- nvmf/common.sh@546 -- # IFS=, 00:24:02.358 08:58:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:02.358 "params": { 00:24:02.358 "name": "Nvme1", 00:24:02.358 "trtype": "tcp", 00:24:02.358 "traddr": "10.0.0.2", 00:24:02.358 "adrfam": "ipv4", 00:24:02.358 "trsvcid": "4420", 00:24:02.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.358 "hdgst": false, 00:24:02.358 "ddgst": false 00:24:02.358 }, 00:24:02.358 "method": "bdev_nvme_attach_controller" 00:24:02.359 },{ 00:24:02.359 "params": { 00:24:02.359 "name": "Nvme2", 00:24:02.359 "trtype": "tcp", 00:24:02.359 "traddr": "10.0.0.2", 00:24:02.359 "adrfam": "ipv4", 00:24:02.359 "trsvcid": "4420", 00:24:02.359 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:02.359 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:02.359 "hdgst": false, 00:24:02.359 "ddgst": false 00:24:02.359 }, 00:24:02.359 "method": "bdev_nvme_attach_controller" 00:24:02.359 },{ 00:24:02.359 "params": { 00:24:02.359 "name": "Nvme3", 00:24:02.359 "trtype": "tcp", 00:24:02.359 "traddr": "10.0.0.2", 00:24:02.359 "adrfam": "ipv4", 00:24:02.359 "trsvcid": "4420", 00:24:02.359 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:02.359 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:02.359 "hdgst": false, 00:24:02.359 "ddgst": false 00:24:02.359 }, 00:24:02.359 "method": "bdev_nvme_attach_controller" 00:24:02.359 },{ 00:24:02.359 "params": { 00:24:02.359 "name": "Nvme4", 00:24:02.359 "trtype": "tcp", 00:24:02.359 "traddr": "10.0.0.2", 00:24:02.359 "adrfam": "ipv4", 00:24:02.359 "trsvcid": "4420", 00:24:02.359 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:02.359 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:02.359 "hdgst": false, 00:24:02.359 "ddgst": false 00:24:02.359 }, 00:24:02.359 "method": "bdev_nvme_attach_controller" 00:24:02.359 },{ 00:24:02.359 "params": { 00:24:02.359 "name": "Nvme5", 00:24:02.359 "trtype": "tcp", 00:24:02.359 "traddr": "10.0.0.2", 00:24:02.359 "adrfam": "ipv4", 00:24:02.359 "trsvcid": "4420", 00:24:02.359 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:02.359 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:02.359 "hdgst": false, 00:24:02.359 "ddgst": false 00:24:02.359 }, 
00:24:02.359 "method": "bdev_nvme_attach_controller" 00:24:02.359 },{ 00:24:02.359 "params": { 00:24:02.359 "name": "Nvme6", 00:24:02.359 "trtype": "tcp", 00:24:02.359 "traddr": "10.0.0.2", 00:24:02.359 "adrfam": "ipv4", 00:24:02.359 "trsvcid": "4420", 00:24:02.359 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:02.359 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:02.359 "hdgst": false, 00:24:02.359 "ddgst": false 00:24:02.359 }, 00:24:02.359 "method": "bdev_nvme_attach_controller" 00:24:02.359 },{ 00:24:02.359 "params": { 00:24:02.359 "name": "Nvme7", 00:24:02.359 "trtype": "tcp", 00:24:02.359 "traddr": "10.0.0.2", 00:24:02.359 "adrfam": "ipv4", 00:24:02.359 "trsvcid": "4420", 00:24:02.359 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:02.359 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:02.359 "hdgst": false, 00:24:02.359 "ddgst": false 00:24:02.359 }, 00:24:02.359 "method": "bdev_nvme_attach_controller" 00:24:02.359 },{ 00:24:02.359 "params": { 00:24:02.359 "name": "Nvme8", 00:24:02.359 "trtype": "tcp", 00:24:02.359 "traddr": "10.0.0.2", 00:24:02.359 "adrfam": "ipv4", 00:24:02.359 "trsvcid": "4420", 00:24:02.359 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:02.359 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:02.359 "hdgst": false, 00:24:02.359 "ddgst": false 00:24:02.359 }, 00:24:02.359 "method": "bdev_nvme_attach_controller" 00:24:02.359 },{ 00:24:02.359 "params": { 00:24:02.359 "name": "Nvme9", 00:24:02.359 "trtype": "tcp", 00:24:02.359 "traddr": "10.0.0.2", 00:24:02.359 "adrfam": "ipv4", 00:24:02.359 "trsvcid": "4420", 00:24:02.359 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:02.359 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:02.359 "hdgst": false, 00:24:02.359 "ddgst": false 00:24:02.359 }, 00:24:02.359 "method": "bdev_nvme_attach_controller" 00:24:02.359 },{ 00:24:02.359 "params": { 00:24:02.359 "name": "Nvme10", 00:24:02.359 "trtype": "tcp", 00:24:02.359 "traddr": "10.0.0.2", 00:24:02.359 "adrfam": "ipv4", 00:24:02.359 "trsvcid": "4420", 00:24:02.359 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:02.359 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:02.359 "hdgst": false, 00:24:02.359 "ddgst": false 00:24:02.359 }, 00:24:02.359 "method": "bdev_nvme_attach_controller" 00:24:02.359 }' 00:24:02.359 [2024-04-26 08:58:19.408112] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.359 [2024-04-26 08:58:19.474299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.737 08:58:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:03.737 08:58:20 -- common/autotest_common.sh@850 -- # return 0 00:24:03.737 08:58:20 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:03.737 08:58:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.737 08:58:20 -- common/autotest_common.sh@10 -- # set +x 00:24:03.737 08:58:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.737 08:58:20 -- target/shutdown.sh@83 -- # kill -9 2142657 00:24:03.737 08:58:20 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:03.737 08:58:20 -- target/shutdown.sh@87 -- # sleep 1 00:24:04.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2142657 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:04.676 08:58:21 -- target/shutdown.sh@88 -- # kill -0 2142373 00:24:04.676 08:58:21 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
--json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:04.676 08:58:21 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:04.676 08:58:21 -- nvmf/common.sh@521 -- # config=() 00:24:04.676 08:58:21 -- nvmf/common.sh@521 -- # local subsystem config 00:24:04.676 08:58:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:04.676 08:58:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:04.676 { 00:24:04.676 "params": { 00:24:04.676 "name": "Nvme$subsystem", 00:24:04.676 "trtype": "$TEST_TRANSPORT", 00:24:04.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.676 "adrfam": "ipv4", 00:24:04.676 "trsvcid": "$NVMF_PORT", 00:24:04.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.676 "hdgst": ${hdgst:-false}, 00:24:04.676 "ddgst": ${ddgst:-false} 00:24:04.676 }, 00:24:04.676 "method": "bdev_nvme_attach_controller" 00:24:04.676 } 00:24:04.676 EOF 00:24:04.676 )") 00:24:04.676 08:58:21 -- nvmf/common.sh@543 -- # cat 00:24:04.676 08:58:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:04.676 08:58:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:04.676 { 00:24:04.676 "params": { 00:24:04.676 "name": "Nvme$subsystem", 00:24:04.676 "trtype": "$TEST_TRANSPORT", 00:24:04.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.676 "adrfam": "ipv4", 00:24:04.676 "trsvcid": "$NVMF_PORT", 00:24:04.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.676 "hdgst": ${hdgst:-false}, 00:24:04.676 "ddgst": ${ddgst:-false} 00:24:04.676 }, 00:24:04.676 "method": "bdev_nvme_attach_controller" 00:24:04.676 } 00:24:04.676 EOF 00:24:04.676 )") 00:24:04.676 08:58:21 -- nvmf/common.sh@543 -- # cat 00:24:04.676 08:58:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:04.676 08:58:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:04.676 { 00:24:04.676 "params": { 00:24:04.676 "name": "Nvme$subsystem", 00:24:04.676 "trtype": "$TEST_TRANSPORT", 00:24:04.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.676 "adrfam": "ipv4", 00:24:04.676 "trsvcid": "$NVMF_PORT", 00:24:04.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.676 "hdgst": ${hdgst:-false}, 00:24:04.676 "ddgst": ${ddgst:-false} 00:24:04.676 }, 00:24:04.676 "method": "bdev_nvme_attach_controller" 00:24:04.676 } 00:24:04.676 EOF 00:24:04.676 )") 00:24:04.676 08:58:21 -- nvmf/common.sh@543 -- # cat 00:24:04.676 08:58:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:04.676 08:58:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:04.676 { 00:24:04.676 "params": { 00:24:04.676 "name": "Nvme$subsystem", 00:24:04.676 "trtype": "$TEST_TRANSPORT", 00:24:04.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.676 "adrfam": "ipv4", 00:24:04.676 "trsvcid": "$NVMF_PORT", 00:24:04.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.676 "hdgst": ${hdgst:-false}, 00:24:04.676 "ddgst": ${ddgst:-false} 00:24:04.677 }, 00:24:04.677 "method": "bdev_nvme_attach_controller" 00:24:04.677 } 00:24:04.677 EOF 00:24:04.677 )") 00:24:04.677 08:58:21 -- nvmf/common.sh@543 -- # cat 00:24:04.677 08:58:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:04.677 08:58:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:04.677 { 00:24:04.677 "params": { 00:24:04.677 "name": "Nvme$subsystem", 00:24:04.677 "trtype": 
"$TEST_TRANSPORT", 00:24:04.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.677 "adrfam": "ipv4", 00:24:04.677 "trsvcid": "$NVMF_PORT", 00:24:04.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.677 "hdgst": ${hdgst:-false}, 00:24:04.677 "ddgst": ${ddgst:-false} 00:24:04.677 }, 00:24:04.677 "method": "bdev_nvme_attach_controller" 00:24:04.677 } 00:24:04.677 EOF 00:24:04.677 )") 00:24:04.677 08:58:21 -- nvmf/common.sh@543 -- # cat 00:24:04.677 08:58:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:04.677 08:58:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:04.677 { 00:24:04.677 "params": { 00:24:04.677 "name": "Nvme$subsystem", 00:24:04.677 "trtype": "$TEST_TRANSPORT", 00:24:04.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.677 "adrfam": "ipv4", 00:24:04.677 "trsvcid": "$NVMF_PORT", 00:24:04.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.677 "hdgst": ${hdgst:-false}, 00:24:04.677 "ddgst": ${ddgst:-false} 00:24:04.677 }, 00:24:04.677 "method": "bdev_nvme_attach_controller" 00:24:04.677 } 00:24:04.677 EOF 00:24:04.677 )") 00:24:04.677 08:58:21 -- nvmf/common.sh@543 -- # cat 00:24:04.677 [2024-04-26 08:58:21.891289] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:24:04.677 [2024-04-26 08:58:21.891345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143000 ] 00:24:04.677 08:58:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:04.677 08:58:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:04.677 { 00:24:04.677 "params": { 00:24:04.677 "name": "Nvme$subsystem", 00:24:04.677 "trtype": "$TEST_TRANSPORT", 00:24:04.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.677 "adrfam": "ipv4", 00:24:04.677 "trsvcid": "$NVMF_PORT", 00:24:04.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.677 "hdgst": ${hdgst:-false}, 00:24:04.677 "ddgst": ${ddgst:-false} 00:24:04.677 }, 00:24:04.677 "method": "bdev_nvme_attach_controller" 00:24:04.677 } 00:24:04.677 EOF 00:24:04.677 )") 00:24:04.677 08:58:21 -- nvmf/common.sh@543 -- # cat 00:24:04.677 08:58:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:04.677 08:58:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:04.677 { 00:24:04.677 "params": { 00:24:04.677 "name": "Nvme$subsystem", 00:24:04.677 "trtype": "$TEST_TRANSPORT", 00:24:04.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.677 "adrfam": "ipv4", 00:24:04.677 "trsvcid": "$NVMF_PORT", 00:24:04.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.677 "hdgst": ${hdgst:-false}, 00:24:04.677 "ddgst": ${ddgst:-false} 00:24:04.677 }, 00:24:04.677 "method": "bdev_nvme_attach_controller" 00:24:04.677 } 00:24:04.677 EOF 00:24:04.677 )") 00:24:04.677 08:58:21 -- nvmf/common.sh@543 -- # cat 00:24:04.677 08:58:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:04.677 08:58:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:04.677 { 00:24:04.677 "params": { 00:24:04.677 "name": "Nvme$subsystem", 00:24:04.677 "trtype": "$TEST_TRANSPORT", 00:24:04.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.677 "adrfam": "ipv4", 00:24:04.677 "trsvcid": 
"$NVMF_PORT", 00:24:04.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.677 "hdgst": ${hdgst:-false}, 00:24:04.677 "ddgst": ${ddgst:-false} 00:24:04.677 }, 00:24:04.677 "method": "bdev_nvme_attach_controller" 00:24:04.677 } 00:24:04.677 EOF 00:24:04.677 )") 00:24:04.677 08:58:21 -- nvmf/common.sh@543 -- # cat 00:24:04.677 08:58:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:04.677 08:58:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:04.677 { 00:24:04.677 "params": { 00:24:04.677 "name": "Nvme$subsystem", 00:24:04.677 "trtype": "$TEST_TRANSPORT", 00:24:04.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.677 "adrfam": "ipv4", 00:24:04.677 "trsvcid": "$NVMF_PORT", 00:24:04.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.677 "hdgst": ${hdgst:-false}, 00:24:04.677 "ddgst": ${ddgst:-false} 00:24:04.677 }, 00:24:04.677 "method": "bdev_nvme_attach_controller" 00:24:04.677 } 00:24:04.677 EOF 00:24:04.677 )") 00:24:04.677 08:58:21 -- nvmf/common.sh@543 -- # cat 00:24:04.937 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.937 08:58:21 -- nvmf/common.sh@545 -- # jq . 00:24:04.937 08:58:21 -- nvmf/common.sh@546 -- # IFS=, 00:24:04.937 08:58:21 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:04.937 "params": { 00:24:04.937 "name": "Nvme1", 00:24:04.937 "trtype": "tcp", 00:24:04.937 "traddr": "10.0.0.2", 00:24:04.937 "adrfam": "ipv4", 00:24:04.937 "trsvcid": "4420", 00:24:04.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:04.937 "hdgst": false, 00:24:04.937 "ddgst": false 00:24:04.937 }, 00:24:04.937 "method": "bdev_nvme_attach_controller" 00:24:04.937 },{ 00:24:04.937 "params": { 00:24:04.937 "name": "Nvme2", 00:24:04.937 "trtype": "tcp", 00:24:04.937 "traddr": "10.0.0.2", 00:24:04.937 "adrfam": "ipv4", 00:24:04.937 "trsvcid": "4420", 00:24:04.937 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:04.937 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:04.937 "hdgst": false, 00:24:04.937 "ddgst": false 00:24:04.937 }, 00:24:04.937 "method": "bdev_nvme_attach_controller" 00:24:04.937 },{ 00:24:04.937 "params": { 00:24:04.937 "name": "Nvme3", 00:24:04.937 "trtype": "tcp", 00:24:04.937 "traddr": "10.0.0.2", 00:24:04.937 "adrfam": "ipv4", 00:24:04.937 "trsvcid": "4420", 00:24:04.937 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:04.937 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:04.937 "hdgst": false, 00:24:04.937 "ddgst": false 00:24:04.937 }, 00:24:04.937 "method": "bdev_nvme_attach_controller" 00:24:04.937 },{ 00:24:04.937 "params": { 00:24:04.937 "name": "Nvme4", 00:24:04.937 "trtype": "tcp", 00:24:04.937 "traddr": "10.0.0.2", 00:24:04.937 "adrfam": "ipv4", 00:24:04.937 "trsvcid": "4420", 00:24:04.937 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:04.937 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:04.937 "hdgst": false, 00:24:04.937 "ddgst": false 00:24:04.937 }, 00:24:04.937 "method": "bdev_nvme_attach_controller" 00:24:04.937 },{ 00:24:04.937 "params": { 00:24:04.937 "name": "Nvme5", 00:24:04.937 "trtype": "tcp", 00:24:04.937 "traddr": "10.0.0.2", 00:24:04.937 "adrfam": "ipv4", 00:24:04.937 "trsvcid": "4420", 00:24:04.937 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:04.937 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:04.937 "hdgst": false, 00:24:04.937 "ddgst": false 00:24:04.937 }, 00:24:04.937 "method": "bdev_nvme_attach_controller" 00:24:04.937 },{ 00:24:04.937 
"params": { 00:24:04.937 "name": "Nvme6", 00:24:04.937 "trtype": "tcp", 00:24:04.937 "traddr": "10.0.0.2", 00:24:04.937 "adrfam": "ipv4", 00:24:04.937 "trsvcid": "4420", 00:24:04.937 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:04.937 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:04.937 "hdgst": false, 00:24:04.937 "ddgst": false 00:24:04.937 }, 00:24:04.937 "method": "bdev_nvme_attach_controller" 00:24:04.937 },{ 00:24:04.937 "params": { 00:24:04.937 "name": "Nvme7", 00:24:04.937 "trtype": "tcp", 00:24:04.937 "traddr": "10.0.0.2", 00:24:04.937 "adrfam": "ipv4", 00:24:04.937 "trsvcid": "4420", 00:24:04.937 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:04.937 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:04.937 "hdgst": false, 00:24:04.937 "ddgst": false 00:24:04.937 }, 00:24:04.937 "method": "bdev_nvme_attach_controller" 00:24:04.937 },{ 00:24:04.937 "params": { 00:24:04.938 "name": "Nvme8", 00:24:04.938 "trtype": "tcp", 00:24:04.938 "traddr": "10.0.0.2", 00:24:04.938 "adrfam": "ipv4", 00:24:04.938 "trsvcid": "4420", 00:24:04.938 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:04.938 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:04.938 "hdgst": false, 00:24:04.938 "ddgst": false 00:24:04.938 }, 00:24:04.938 "method": "bdev_nvme_attach_controller" 00:24:04.938 },{ 00:24:04.938 "params": { 00:24:04.938 "name": "Nvme9", 00:24:04.938 "trtype": "tcp", 00:24:04.938 "traddr": "10.0.0.2", 00:24:04.938 "adrfam": "ipv4", 00:24:04.938 "trsvcid": "4420", 00:24:04.938 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:04.938 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:04.938 "hdgst": false, 00:24:04.938 "ddgst": false 00:24:04.938 }, 00:24:04.938 "method": "bdev_nvme_attach_controller" 00:24:04.938 },{ 00:24:04.938 "params": { 00:24:04.938 "name": "Nvme10", 00:24:04.938 "trtype": "tcp", 00:24:04.938 "traddr": "10.0.0.2", 00:24:04.938 "adrfam": "ipv4", 00:24:04.938 "trsvcid": "4420", 00:24:04.938 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:04.938 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:04.938 "hdgst": false, 00:24:04.938 "ddgst": false 00:24:04.938 }, 00:24:04.938 "method": "bdev_nvme_attach_controller" 00:24:04.938 }' 00:24:04.938 [2024-04-26 08:58:21.966054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.938 [2024-04-26 08:58:22.033909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.844 Running I/O for 1 seconds... 
00:24:07.784
00:24:07.784                                 Latency(us)
00:24:07.784 Device Information              : runtime(s)  IOPS    MiB/s  Fail/s  TO/s  Average    min       max
00:24:07.784 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.784 Verification LBA range: start 0x0 length 0x400
00:24:07.784 Nvme1n1                         : 1.08        237.35  14.83  0.00    0.00  267302.91  22229.81  233203.30
00:24:07.784 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.784 Verification LBA range: start 0x0 length 0x400
00:24:07.784 Nvme2n1                         : 1.12        229.40  14.34  0.00    0.00  273027.69  21705.52  265080.01
00:24:07.784 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.784 Verification LBA range: start 0x0 length 0x400
00:24:07.784 Nvme3n1                         : 1.08        236.61  14.79  0.00    0.00  260787.00  21286.09  243269.63
00:24:07.784 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.784 Verification LBA range: start 0x0 length 0x400
00:24:07.784 Nvme4n1                         : 1.11        230.41  14.40  0.00    0.00  263516.16  20971.52  244947.35
00:24:07.784 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.784 Verification LBA range: start 0x0 length 0x400
00:24:07.784 Nvme5n1                         : 1.07        300.40  18.77  0.00    0.00  199245.82  18559.80  208037.48
00:24:07.784 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.784 Verification LBA range: start 0x0 length 0x400
00:24:07.784 Nvme6n1                         : 1.09        235.82  14.74  0.00    0.00  250244.71  33973.86  229847.86
00:24:07.784 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.784 Verification LBA range: start 0x0 length 0x400
00:24:07.784 Nvme7n1                         : 1.16        332.27  20.77  0.00    0.00  176021.37  19084.08  208037.48
00:24:07.784 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.784 Verification LBA range: start 0x0 length 0x400
00:24:07.784 Nvme8n1                         : 1.18        270.41  16.90  0.00    0.00  206062.22  20552.09  218103.81
00:24:07.784 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.784 Verification LBA range: start 0x0 length 0x400
00:24:07.784 Nvme9n1                         : 1.17        327.22  20.45  0.00    0.00  174140.42  12320.77  228170.14
00:24:07.784 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:07.784 Verification LBA range: start 0x0 length 0x400
00:24:07.784 Nvme10n1                        : 1.18        325.76  20.36  0.00    0.00  172239.39  9699.33   196293.43
00:24:07.784 ===================================================================================================================
00:24:07.784 Total                           :             2725.64 170.35 0.00    0.00  217092.94  9699.33   265080.01
00:24:07.784 08:58:24 -- target/shutdown.sh@94 -- # stoptarget
00:24:07.784 08:58:24 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:24:07.784 08:58:24 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:07.784 08:58:24 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:07.784 08:58:24 -- target/shutdown.sh@45 -- # nvmftestfini
00:24:07.784 08:58:24 -- nvmf/common.sh@477 -- # nvmfcleanup
00:24:07.784 08:58:24 -- nvmf/common.sh@117 -- # sync
00:24:07.784 08:58:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:07.784 08:58:25 -- nvmf/common.sh@120 -- # set +e
00:24:07.784 08:58:25 -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:07.784 08:58:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:07.784 rmmod nvme_tcp
00:24:07.784 rmmod nvme_fabrics
00:24:08.043 rmmod nvme_keyring
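The teardown that starts above (stoptarget, then nvmftestfini) and continues below unwinds the setup in reverse: delete the bdevperf artifacts, unload the kernel NVMe modules (the rmmod lines), kill the target and wait for it to exit, then remove the namespace plumbing. Condensed, with the caveat that the body of _remove_spdk_ns is not shown in this trace, so the netns delete is an assumption about what it does:

rm -f ./local-job0-0-verify.state                     # bdevperf's verify state file
rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"   # $testdir: test/nvmf/target in this tree
sync
modprobe -v -r nvme-tcp                               # prints the rmmod lines seen above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                    # orderly target shutdown, the behavior under test
ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1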
00:24:08.043 08:58:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:08.043 08:58:25 -- nvmf/common.sh@124 -- # set -e
00:24:08.043 08:58:25 -- nvmf/common.sh@125 -- # return 0
00:24:08.043 08:58:25 -- nvmf/common.sh@478 -- # '[' -n 2142373 ']'
00:24:08.043 08:58:25 -- nvmf/common.sh@479 -- # killprocess 2142373
00:24:08.043 08:58:25 -- common/autotest_common.sh@936 -- # '[' -z 2142373 ']'
00:24:08.043 08:58:25 -- common/autotest_common.sh@940 -- # kill -0 2142373
00:24:08.043 08:58:25 -- common/autotest_common.sh@941 -- # uname
00:24:08.043 08:58:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:08.043 08:58:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2142373
00:24:08.043 08:58:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:24:08.043 08:58:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:24:08.043 08:58:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2142373'
00:24:08.043 killing process with pid 2142373
00:24:08.043 08:58:25 -- common/autotest_common.sh@955 -- # kill 2142373
00:24:08.044 08:58:25 -- common/autotest_common.sh@960 -- # wait 2142373
00:24:08.303 08:58:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:24:08.303 08:58:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:24:08.303 08:58:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:24:08.303 08:58:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:08.303 08:58:25 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:08.303 08:58:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:08.303 08:58:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:08.303 08:58:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:10.848 08:58:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:10.848
00:24:10.848 real 0m16.541s
00:24:10.848 user 0m35.333s
00:24:10.848 sys 0m6.931s
00:24:10.848 08:58:27 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:24:10.848 08:58:27 -- common/autotest_common.sh@10 -- # set +x
00:24:10.848 ************************************
00:24:10.848 END TEST nvmf_shutdown_tc1
00:24:10.848 ************************************
00:24:10.848 08:58:27 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2
00:24:10.848 08:58:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:24:10.848 08:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:24:10.848 08:58:27 -- common/autotest_common.sh@10 -- # set +x
00:24:10.848 ************************************
00:24:10.848 START TEST nvmf_shutdown_tc2
00:24:10.848 ************************************
00:24:10.848 08:58:27 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2
00:24:10.848 08:58:27 -- target/shutdown.sh@99 -- # starttarget
00:24:10.848 08:58:27 -- target/shutdown.sh@15 -- # nvmftestinit
00:24:10.848 08:58:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:24:10.848 08:58:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:10.848 08:58:27 -- nvmf/common.sh@437 -- # prepare_net_devs
00:24:10.848 08:58:27 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:24:10.848 08:58:27 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:24:10.848 08:58:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:10.848 08:58:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:10.848 08:58:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:10.848 08:58:27 --
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:10.848 08:58:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:10.848 08:58:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:10.848 08:58:27 -- common/autotest_common.sh@10 -- # set +x 00:24:10.848 08:58:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:10.848 08:58:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:10.848 08:58:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:10.848 08:58:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:10.848 08:58:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:10.848 08:58:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:10.848 08:58:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:10.848 08:58:27 -- nvmf/common.sh@295 -- # net_devs=() 00:24:10.848 08:58:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:10.848 08:58:27 -- nvmf/common.sh@296 -- # e810=() 00:24:10.848 08:58:27 -- nvmf/common.sh@296 -- # local -ga e810 00:24:10.848 08:58:27 -- nvmf/common.sh@297 -- # x722=() 00:24:10.848 08:58:27 -- nvmf/common.sh@297 -- # local -ga x722 00:24:10.848 08:58:27 -- nvmf/common.sh@298 -- # mlx=() 00:24:10.848 08:58:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:10.848 08:58:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.848 08:58:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.848 08:58:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.848 08:58:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.848 08:58:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.848 08:58:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.848 08:58:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.848 08:58:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.848 08:58:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.848 08:58:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.848 08:58:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.848 08:58:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:10.848 08:58:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:10.848 08:58:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:10.848 08:58:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:10.848 08:58:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:10.848 08:58:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:10.848 08:58:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.848 08:58:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:10.848 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:10.848 08:58:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.848 08:58:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.848 08:58:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.848 08:58:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.848 08:58:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.848 08:58:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.848 08:58:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:10.848 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:10.848 08:58:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.848 08:58:27 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.848 08:58:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.848 08:58:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.848 08:58:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.848 08:58:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:10.848 08:58:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:10.848 08:58:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:10.848 08:58:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.848 08:58:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.848 08:58:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:10.848 08:58:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.849 08:58:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:10.849 Found net devices under 0000:af:00.0: cvl_0_0 00:24:10.849 08:58:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.849 08:58:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.849 08:58:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.849 08:58:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:10.849 08:58:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.849 08:58:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:10.849 Found net devices under 0000:af:00.1: cvl_0_1 00:24:10.849 08:58:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.849 08:58:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:10.849 08:58:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:10.849 08:58:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:10.849 08:58:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:10.849 08:58:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:10.849 08:58:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.849 08:58:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.849 08:58:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.849 08:58:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:10.849 08:58:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.849 08:58:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.849 08:58:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:10.849 08:58:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.849 08:58:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.849 08:58:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:10.849 08:58:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:10.849 08:58:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.849 08:58:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.849 08:58:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.849 08:58:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.849 08:58:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:10.849 08:58:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:11.113 08:58:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:11.113 08:58:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
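The discovery loop above (the same one traced for tc1) resolves each matching PCI function to its kernel net device purely through sysfs; the "Found net devices under ..." lines are its output. The vendor:device pair 0x8086:0x159b matched here is the E810 family driven by ice, and the failing 0x1017 / 0x1019 comparisons are Mellanox ConnectX device IDs that only matter on RDMA runs. The resolution step, standalone:

pci=0000:af:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # e.g. .../net/cvl_0_0
(( ${#pci_net_devs[@]} == 0 )) && echo "no netdev bound to $pci" >&2
pci_net_devs=("${pci_net_devs[@]##*/}")              # strip the sysfs path, leaving cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"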
00:24:11.113 08:58:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:11.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:11.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:24:11.113 00:24:11.113 --- 10.0.0.2 ping statistics --- 00:24:11.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.113 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:24:11.113 08:58:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:11.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:11.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:24:11.113 00:24:11.113 --- 10.0.0.1 ping statistics --- 00:24:11.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.113 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:24:11.113 08:58:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.113 08:58:28 -- nvmf/common.sh@411 -- # return 0 00:24:11.113 08:58:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:11.114 08:58:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.114 08:58:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:11.114 08:58:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:11.114 08:58:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.114 08:58:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:11.114 08:58:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:11.114 08:58:28 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:11.114 08:58:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:11.114 08:58:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:11.114 08:58:28 -- common/autotest_common.sh@10 -- # set +x 00:24:11.114 08:58:28 -- nvmf/common.sh@470 -- # nvmfpid=2144175 00:24:11.114 08:58:28 -- nvmf/common.sh@471 -- # waitforlisten 2144175 00:24:11.114 08:58:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:11.114 08:58:28 -- common/autotest_common.sh@817 -- # '[' -z 2144175 ']' 00:24:11.114 08:58:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.114 08:58:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:11.114 08:58:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.114 08:58:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:11.114 08:58:28 -- common/autotest_common.sh@10 -- # set +x 00:24:11.114 [2024-04-26 08:58:28.265035] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:24:11.114 [2024-04-26 08:58:28.265084] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.114 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.114 [2024-04-26 08:58:28.341262] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:11.373 [2024-04-26 08:58:28.415101] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
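The target above is launched with -i 0 -e 0xFFFF -m 0x1E: shared-memory trace instance 0, every tracepoint group enabled, reactors on cores 1 through 4. Per the app_setup_trace notices here and continuing below, the trace can be inspected while the target runs:

spdk_trace -s nvmf -i 0            # snapshot the events of shm instance 0, as the notice suggests
cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the raw ring for offline analysis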
00:24:11.373 [2024-04-26 08:58:28.415141] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.373 [2024-04-26 08:58:28.415152] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.373 [2024-04-26 08:58:28.415161] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.373 [2024-04-26 08:58:28.415168] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.373 [2024-04-26 08:58:28.415228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.373 [2024-04-26 08:58:28.415311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:11.373 [2024-04-26 08:58:28.415421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.373 [2024-04-26 08:58:28.415422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:11.943 08:58:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:11.943 08:58:29 -- common/autotest_common.sh@850 -- # return 0 00:24:11.943 08:58:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:11.943 08:58:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:11.943 08:58:29 -- common/autotest_common.sh@10 -- # set +x 00:24:11.943 08:58:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.943 08:58:29 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:11.943 08:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.943 08:58:29 -- common/autotest_common.sh@10 -- # set +x 00:24:11.943 [2024-04-26 08:58:29.118288] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.943 08:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.943 08:58:29 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:11.943 08:58:29 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:11.943 08:58:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:11.943 08:58:29 -- common/autotest_common.sh@10 -- # set +x 00:24:11.943 08:58:29 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:11.943 08:58:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.943 08:58:29 -- target/shutdown.sh@28 -- # cat 00:24:11.943 08:58:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.943 08:58:29 -- target/shutdown.sh@28 -- # cat 00:24:11.943 08:58:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.943 08:58:29 -- target/shutdown.sh@28 -- # cat 00:24:11.943 08:58:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.943 08:58:29 -- target/shutdown.sh@28 -- # cat 00:24:11.943 08:58:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.943 08:58:29 -- target/shutdown.sh@28 -- # cat 00:24:11.943 08:58:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.943 08:58:29 -- target/shutdown.sh@28 -- # cat 00:24:11.943 08:58:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.943 08:58:29 -- target/shutdown.sh@28 -- # cat 00:24:11.943 08:58:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.943 08:58:29 -- target/shutdown.sh@28 -- # cat 00:24:11.943 08:58:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.943 08:58:29 -- 
target/shutdown.sh@28 -- # cat 00:24:11.943 08:58:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.943 08:58:29 -- target/shutdown.sh@28 -- # cat 00:24:11.943 08:58:29 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:12.202 08:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.202 08:58:29 -- common/autotest_common.sh@10 -- # set +x 00:24:12.202 Malloc1 00:24:12.202 [2024-04-26 08:58:29.228980] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.202 Malloc2 00:24:12.202 Malloc3 00:24:12.202 Malloc4 00:24:12.202 Malloc5 00:24:12.202 Malloc6 00:24:12.463 Malloc7 00:24:12.463 Malloc8 00:24:12.463 Malloc9 00:24:12.463 Malloc10 00:24:12.463 08:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.463 08:58:29 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:12.463 08:58:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:12.463 08:58:29 -- common/autotest_common.sh@10 -- # set +x 00:24:12.463 08:58:29 -- target/shutdown.sh@103 -- # perfpid=2144493 00:24:12.463 08:58:29 -- target/shutdown.sh@104 -- # waitforlisten 2144493 /var/tmp/bdevperf.sock 00:24:12.463 08:58:29 -- common/autotest_common.sh@817 -- # '[' -z 2144493 ']' 00:24:12.463 08:58:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.463 08:58:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:12.463 08:58:29 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:12.463 08:58:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:12.463 08:58:29 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:12.463 08:58:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:12.463 08:58:29 -- common/autotest_common.sh@10 -- # set +x 00:24:12.463 08:58:29 -- nvmf/common.sh@521 -- # config=() 00:24:12.463 08:58:29 -- nvmf/common.sh@521 -- # local subsystem config 00:24:12.463 08:58:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:12.463 08:58:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:12.463 { 00:24:12.463 "params": { 00:24:12.463 "name": "Nvme$subsystem", 00:24:12.463 "trtype": "$TEST_TRANSPORT", 00:24:12.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.463 "adrfam": "ipv4", 00:24:12.463 "trsvcid": "$NVMF_PORT", 00:24:12.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.463 "hdgst": ${hdgst:-false}, 00:24:12.463 "ddgst": ${ddgst:-false} 00:24:12.463 }, 00:24:12.463 "method": "bdev_nvme_attach_controller" 00:24:12.463 } 00:24:12.463 EOF 00:24:12.463 )") 00:24:12.463 08:58:29 -- nvmf/common.sh@543 -- # cat 00:24:12.463 08:58:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:12.463 08:58:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:12.463 { 00:24:12.463 "params": { 00:24:12.463 "name": "Nvme$subsystem", 00:24:12.463 "trtype": "$TEST_TRANSPORT", 00:24:12.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.463 "adrfam": "ipv4", 00:24:12.463 "trsvcid": "$NVMF_PORT", 00:24:12.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.463 "hdgst": ${hdgst:-false}, 00:24:12.463 "ddgst": ${ddgst:-false} 00:24:12.463 }, 00:24:12.463 "method": "bdev_nvme_attach_controller" 00:24:12.463 } 00:24:12.463 EOF 00:24:12.463 )") 00:24:12.463 08:58:29 -- nvmf/common.sh@543 -- # cat 00:24:12.463 08:58:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:12.463 08:58:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:12.463 { 00:24:12.463 "params": { 00:24:12.463 "name": "Nvme$subsystem", 00:24:12.463 "trtype": "$TEST_TRANSPORT", 00:24:12.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.463 "adrfam": "ipv4", 00:24:12.463 "trsvcid": "$NVMF_PORT", 00:24:12.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.463 "hdgst": ${hdgst:-false}, 00:24:12.463 "ddgst": ${ddgst:-false} 00:24:12.463 }, 00:24:12.463 "method": "bdev_nvme_attach_controller" 00:24:12.463 } 00:24:12.463 EOF 00:24:12.463 )") 00:24:12.463 08:58:29 -- nvmf/common.sh@543 -- # cat 00:24:12.463 08:58:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:12.463 08:58:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:12.463 { 00:24:12.463 "params": { 00:24:12.463 "name": "Nvme$subsystem", 00:24:12.463 "trtype": "$TEST_TRANSPORT", 00:24:12.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.463 "adrfam": "ipv4", 00:24:12.463 "trsvcid": "$NVMF_PORT", 00:24:12.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.463 "hdgst": ${hdgst:-false}, 00:24:12.463 "ddgst": ${ddgst:-false} 00:24:12.463 }, 00:24:12.463 "method": "bdev_nvme_attach_controller" 00:24:12.463 } 00:24:12.463 EOF 00:24:12.463 )") 00:24:12.463 08:58:29 -- nvmf/common.sh@543 -- # cat 00:24:12.463 08:58:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:12.463 08:58:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 
00:24:12.463 { 00:24:12.463 "params": { 00:24:12.463 "name": "Nvme$subsystem", 00:24:12.463 "trtype": "$TEST_TRANSPORT", 00:24:12.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.463 "adrfam": "ipv4", 00:24:12.463 "trsvcid": "$NVMF_PORT", 00:24:12.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.463 "hdgst": ${hdgst:-false}, 00:24:12.463 "ddgst": ${ddgst:-false} 00:24:12.463 }, 00:24:12.463 "method": "bdev_nvme_attach_controller" 00:24:12.463 } 00:24:12.463 EOF 00:24:12.463 )") 00:24:12.463 08:58:29 -- nvmf/common.sh@543 -- # cat 00:24:12.464 08:58:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:12.464 08:58:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:12.464 { 00:24:12.464 "params": { 00:24:12.464 "name": "Nvme$subsystem", 00:24:12.464 "trtype": "$TEST_TRANSPORT", 00:24:12.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.464 "adrfam": "ipv4", 00:24:12.464 "trsvcid": "$NVMF_PORT", 00:24:12.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.464 "hdgst": ${hdgst:-false}, 00:24:12.464 "ddgst": ${ddgst:-false} 00:24:12.464 }, 00:24:12.464 "method": "bdev_nvme_attach_controller" 00:24:12.464 } 00:24:12.464 EOF 00:24:12.464 )") 00:24:12.464 [2024-04-26 08:58:29.708483] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:24:12.464 [2024-04-26 08:58:29.708535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144493 ] 00:24:12.464 08:58:29 -- nvmf/common.sh@543 -- # cat 00:24:12.724 08:58:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:12.724 08:58:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:12.724 { 00:24:12.724 "params": { 00:24:12.724 "name": "Nvme$subsystem", 00:24:12.724 "trtype": "$TEST_TRANSPORT", 00:24:12.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.724 "adrfam": "ipv4", 00:24:12.724 "trsvcid": "$NVMF_PORT", 00:24:12.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.724 "hdgst": ${hdgst:-false}, 00:24:12.724 "ddgst": ${ddgst:-false} 00:24:12.724 }, 00:24:12.724 "method": "bdev_nvme_attach_controller" 00:24:12.724 } 00:24:12.724 EOF 00:24:12.724 )") 00:24:12.724 08:58:29 -- nvmf/common.sh@543 -- # cat 00:24:12.724 08:58:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:12.724 08:58:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:12.724 { 00:24:12.724 "params": { 00:24:12.724 "name": "Nvme$subsystem", 00:24:12.724 "trtype": "$TEST_TRANSPORT", 00:24:12.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.724 "adrfam": "ipv4", 00:24:12.724 "trsvcid": "$NVMF_PORT", 00:24:12.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.724 "hdgst": ${hdgst:-false}, 00:24:12.724 "ddgst": ${ddgst:-false} 00:24:12.724 }, 00:24:12.724 "method": "bdev_nvme_attach_controller" 00:24:12.724 } 00:24:12.724 EOF 00:24:12.724 )") 00:24:12.724 08:58:29 -- nvmf/common.sh@543 -- # cat 00:24:12.724 08:58:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:12.724 08:58:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:12.724 { 00:24:12.724 "params": { 00:24:12.724 "name": "Nvme$subsystem", 00:24:12.724 "trtype": "$TEST_TRANSPORT", 
00:24:12.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.724 "adrfam": "ipv4", 00:24:12.724 "trsvcid": "$NVMF_PORT", 00:24:12.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.724 "hdgst": ${hdgst:-false}, 00:24:12.724 "ddgst": ${ddgst:-false} 00:24:12.724 }, 00:24:12.724 "method": "bdev_nvme_attach_controller" 00:24:12.724 } 00:24:12.724 EOF 00:24:12.724 )") 00:24:12.724 08:58:29 -- nvmf/common.sh@543 -- # cat 00:24:12.724 08:58:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:12.724 08:58:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:12.724 { 00:24:12.724 "params": { 00:24:12.724 "name": "Nvme$subsystem", 00:24:12.724 "trtype": "$TEST_TRANSPORT", 00:24:12.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.724 "adrfam": "ipv4", 00:24:12.724 "trsvcid": "$NVMF_PORT", 00:24:12.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.725 "hdgst": ${hdgst:-false}, 00:24:12.725 "ddgst": ${ddgst:-false} 00:24:12.725 }, 00:24:12.725 "method": "bdev_nvme_attach_controller" 00:24:12.725 } 00:24:12.725 EOF 00:24:12.725 )") 00:24:12.725 08:58:29 -- nvmf/common.sh@543 -- # cat 00:24:12.725 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.725 08:58:29 -- nvmf/common.sh@545 -- # jq . 00:24:12.725 08:58:29 -- nvmf/common.sh@546 -- # IFS=, 00:24:12.725 08:58:29 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:12.725 "params": { 00:24:12.725 "name": "Nvme1", 00:24:12.725 "trtype": "tcp", 00:24:12.725 "traddr": "10.0.0.2", 00:24:12.725 "adrfam": "ipv4", 00:24:12.725 "trsvcid": "4420", 00:24:12.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:12.725 "hdgst": false, 00:24:12.725 "ddgst": false 00:24:12.725 }, 00:24:12.725 "method": "bdev_nvme_attach_controller" 00:24:12.725 },{ 00:24:12.725 "params": { 00:24:12.725 "name": "Nvme2", 00:24:12.725 "trtype": "tcp", 00:24:12.725 "traddr": "10.0.0.2", 00:24:12.725 "adrfam": "ipv4", 00:24:12.725 "trsvcid": "4420", 00:24:12.725 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:12.725 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:12.725 "hdgst": false, 00:24:12.725 "ddgst": false 00:24:12.725 }, 00:24:12.725 "method": "bdev_nvme_attach_controller" 00:24:12.725 },{ 00:24:12.725 "params": { 00:24:12.725 "name": "Nvme3", 00:24:12.725 "trtype": "tcp", 00:24:12.725 "traddr": "10.0.0.2", 00:24:12.725 "adrfam": "ipv4", 00:24:12.725 "trsvcid": "4420", 00:24:12.725 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:12.725 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:12.725 "hdgst": false, 00:24:12.725 "ddgst": false 00:24:12.725 }, 00:24:12.725 "method": "bdev_nvme_attach_controller" 00:24:12.725 },{ 00:24:12.725 "params": { 00:24:12.725 "name": "Nvme4", 00:24:12.725 "trtype": "tcp", 00:24:12.725 "traddr": "10.0.0.2", 00:24:12.725 "adrfam": "ipv4", 00:24:12.725 "trsvcid": "4420", 00:24:12.725 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:12.725 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:12.725 "hdgst": false, 00:24:12.725 "ddgst": false 00:24:12.725 }, 00:24:12.725 "method": "bdev_nvme_attach_controller" 00:24:12.725 },{ 00:24:12.725 "params": { 00:24:12.725 "name": "Nvme5", 00:24:12.725 "trtype": "tcp", 00:24:12.725 "traddr": "10.0.0.2", 00:24:12.725 "adrfam": "ipv4", 00:24:12.725 "trsvcid": "4420", 00:24:12.725 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:12.725 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:12.725 "hdgst": false, 00:24:12.725 "ddgst": 
false 00:24:12.725 }, 00:24:12.725 "method": "bdev_nvme_attach_controller" 00:24:12.725 },{ 00:24:12.725 "params": { 00:24:12.725 "name": "Nvme6", 00:24:12.725 "trtype": "tcp", 00:24:12.725 "traddr": "10.0.0.2", 00:24:12.725 "adrfam": "ipv4", 00:24:12.725 "trsvcid": "4420", 00:24:12.725 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:12.725 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:12.725 "hdgst": false, 00:24:12.725 "ddgst": false 00:24:12.725 }, 00:24:12.725 "method": "bdev_nvme_attach_controller" 00:24:12.725 },{ 00:24:12.725 "params": { 00:24:12.725 "name": "Nvme7", 00:24:12.725 "trtype": "tcp", 00:24:12.725 "traddr": "10.0.0.2", 00:24:12.725 "adrfam": "ipv4", 00:24:12.725 "trsvcid": "4420", 00:24:12.725 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:12.725 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:12.725 "hdgst": false, 00:24:12.725 "ddgst": false 00:24:12.725 }, 00:24:12.725 "method": "bdev_nvme_attach_controller" 00:24:12.725 },{ 00:24:12.725 "params": { 00:24:12.725 "name": "Nvme8", 00:24:12.725 "trtype": "tcp", 00:24:12.725 "traddr": "10.0.0.2", 00:24:12.725 "adrfam": "ipv4", 00:24:12.725 "trsvcid": "4420", 00:24:12.725 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:12.725 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:12.725 "hdgst": false, 00:24:12.725 "ddgst": false 00:24:12.725 }, 00:24:12.725 "method": "bdev_nvme_attach_controller" 00:24:12.725 },{ 00:24:12.725 "params": { 00:24:12.725 "name": "Nvme9", 00:24:12.725 "trtype": "tcp", 00:24:12.725 "traddr": "10.0.0.2", 00:24:12.725 "adrfam": "ipv4", 00:24:12.725 "trsvcid": "4420", 00:24:12.725 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:12.725 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:12.725 "hdgst": false, 00:24:12.725 "ddgst": false 00:24:12.725 }, 00:24:12.725 "method": "bdev_nvme_attach_controller" 00:24:12.725 },{ 00:24:12.725 "params": { 00:24:12.725 "name": "Nvme10", 00:24:12.725 "trtype": "tcp", 00:24:12.725 "traddr": "10.0.0.2", 00:24:12.725 "adrfam": "ipv4", 00:24:12.725 "trsvcid": "4420", 00:24:12.725 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:12.725 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:12.725 "hdgst": false, 00:24:12.725 "ddgst": false 00:24:12.725 }, 00:24:12.725 "method": "bdev_nvme_attach_controller" 00:24:12.725 }' 00:24:12.725 [2024-04-26 08:58:29.779646] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.725 [2024-04-26 08:58:29.845753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.633 Running I/O for 10 seconds... 
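The wall of heredocs above is gen_nvmf_target_json at work: one JSON fragment per subsystem id, accumulated into an array and joined with IFS=, before being fed to bdevperf through --json /dev/fd/63. A minimal sketch of that pattern, mirroring the trace rather than the exact helper:

config=()
for i in {1..10}; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # ten bdev_nvme_attach_controller entries, comma-joined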
00:24:14.633 08:58:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:14.633 08:58:31 -- common/autotest_common.sh@850 -- # return 0 00:24:14.633 08:58:31 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:14.633 08:58:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.633 08:58:31 -- common/autotest_common.sh@10 -- # set +x 00:24:14.633 08:58:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.633 08:58:31 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:14.633 08:58:31 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:14.633 08:58:31 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:14.633 08:58:31 -- target/shutdown.sh@57 -- # local ret=1 00:24:14.633 08:58:31 -- target/shutdown.sh@58 -- # local i 00:24:14.633 08:58:31 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:14.633 08:58:31 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:14.633 08:58:31 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:14.633 08:58:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.633 08:58:31 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:14.633 08:58:31 -- common/autotest_common.sh@10 -- # set +x 00:24:14.633 08:58:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.633 08:58:31 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:14.633 08:58:31 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:14.633 08:58:31 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:14.633 08:58:31 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:14.633 08:58:31 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:14.633 08:58:31 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:14.633 08:58:31 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:14.633 08:58:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.633 08:58:31 -- common/autotest_common.sh@10 -- # set +x 00:24:14.893 08:58:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.893 08:58:31 -- target/shutdown.sh@60 -- # read_io_count=75 00:24:14.893 08:58:31 -- target/shutdown.sh@63 -- # '[' 75 -ge 100 ']' 00:24:14.893 08:58:31 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:15.154 08:58:32 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:15.154 08:58:32 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:15.154 08:58:32 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:15.154 08:58:32 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:15.154 08:58:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.154 08:58:32 -- common/autotest_common.sh@10 -- # set +x 00:24:15.154 08:58:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.154 08:58:32 -- target/shutdown.sh@60 -- # read_io_count=198 00:24:15.154 08:58:32 -- target/shutdown.sh@63 -- # '[' 198 -ge 100 ']' 00:24:15.154 08:58:32 -- target/shutdown.sh@64 -- # ret=0 00:24:15.154 08:58:32 -- target/shutdown.sh@65 -- # break 00:24:15.154 08:58:32 -- target/shutdown.sh@69 -- # return 0 00:24:15.154 08:58:32 -- target/shutdown.sh@110 -- # killprocess 2144493 00:24:15.154 08:58:32 -- common/autotest_common.sh@936 -- # '[' -z 2144493 ']' 00:24:15.154 08:58:32 -- common/autotest_common.sh@940 -- # kill -0 2144493 00:24:15.154 08:58:32 -- common/autotest_common.sh@941 -- # uname 00:24:15.154 08:58:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:24:15.154 08:58:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2144493
00:24:15.154 08:58:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:15.154 08:58:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:15.154 08:58:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2144493'
00:24:15.154 killing process with pid 2144493
00:24:15.154 08:58:32 -- common/autotest_common.sh@955 -- # kill 2144493
00:24:15.154 08:58:32 -- common/autotest_common.sh@960 -- # wait 2144493
00:24:15.154 Received shutdown signal, test time was about 0.960845 seconds
00:24:15.154
00:24:15.154 Latency(us)
00:24:15.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:15.154 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.154 Verification LBA range: start 0x0 length 0x400
00:24:15.154 Nvme1n1 : 0.90 287.37 17.96 0.00 0.00 219477.33 5400.17 208876.34
00:24:15.154 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.154 Verification LBA range: start 0x0 length 0x400
00:24:15.154 Nvme2n1 : 0.95 337.91 21.12 0.00 0.00 184507.92 19188.94 204682.04
00:24:15.154 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.154 Verification LBA range: start 0x0 length 0x400
00:24:15.154 Nvme3n1 : 0.96 264.53 16.53 0.00 0.00 231557.37 16777.22 236558.75
00:24:15.154 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.154 Verification LBA range: start 0x0 length 0x400
00:24:15.154 Nvme4n1 : 0.91 211.76 13.23 0.00 0.00 284107.30 19922.94 253335.96
00:24:15.154 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.154 Verification LBA range: start 0x0 length 0x400
00:24:15.154 Nvme5n1 : 0.91 279.96 17.50 0.00 0.00 211084.08 20656.95 221459.25
00:24:15.154 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.154 Verification LBA range: start 0x0 length 0x400
00:24:15.154 Nvme6n1 : 0.93 206.96 12.93 0.00 0.00 281206.78 34812.72 256691.40
00:24:15.154 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.154 Verification LBA range: start 0x0 length 0x400
00:24:15.154 Nvme7n1 : 0.91 289.11 18.07 0.00 0.00 194253.64 4404.02 213070.64
00:24:15.154 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.154 Verification LBA range: start 0x0 length 0x400
00:24:15.154 Nvme8n1 : 0.90 283.47 17.72 0.00 0.00 197089.28 22020.10 198810.01
00:24:15.154 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.154 Verification LBA range: start 0x0 length 0x400
00:24:15.154 Nvme9n1 : 0.93 274.30 17.14 0.00 0.00 201150.67 20027.80 219781.53
00:24:15.154 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:15.154 Verification LBA range: start 0x0 length 0x400
00:24:15.154 Nvme10n1 : 0.93 274.87 17.18 0.00 0.00 196858.06 22544.38 221459.25
00:24:15.154 ===================================================================================================================
00:24:15.154 Total : 2710.23 169.39 0.00 0.00 215940.24 4404.02 256691.40
00:24:15.414 08:58:32 -- target/shutdown.sh@113 -- # sleep 1
00:24:16.363 08:58:33 -- target/shutdown.sh@114 -- # kill -0 2144175
00:24:16.363 08:58:33 -- target/shutdown.sh@116 -- # stoptarget
00:24:16.363 08:58:33 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:24:16.363 08:58:33 -- target/shutdown.sh@42 --
# rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:16.363 08:58:33 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:16.622 08:58:33 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:16.622 08:58:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:16.622 08:58:33 -- nvmf/common.sh@117 -- # sync 00:24:16.622 08:58:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.622 08:58:33 -- nvmf/common.sh@120 -- # set +e 00:24:16.622 08:58:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.622 08:58:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.622 rmmod nvme_tcp 00:24:16.622 rmmod nvme_fabrics 00:24:16.623 rmmod nvme_keyring 00:24:16.623 08:58:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:16.623 08:58:33 -- nvmf/common.sh@124 -- # set -e 00:24:16.623 08:58:33 -- nvmf/common.sh@125 -- # return 0 00:24:16.623 08:58:33 -- nvmf/common.sh@478 -- # '[' -n 2144175 ']' 00:24:16.623 08:58:33 -- nvmf/common.sh@479 -- # killprocess 2144175 00:24:16.623 08:58:33 -- common/autotest_common.sh@936 -- # '[' -z 2144175 ']' 00:24:16.623 08:58:33 -- common/autotest_common.sh@940 -- # kill -0 2144175 00:24:16.623 08:58:33 -- common/autotest_common.sh@941 -- # uname 00:24:16.623 08:58:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:16.623 08:58:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2144175 00:24:16.623 08:58:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:16.623 08:58:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:16.623 08:58:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2144175' 00:24:16.623 killing process with pid 2144175 00:24:16.623 08:58:33 -- common/autotest_common.sh@955 -- # kill 2144175 00:24:16.623 08:58:33 -- common/autotest_common.sh@960 -- # wait 2144175 00:24:17.192 08:58:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:17.192 08:58:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:17.192 08:58:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:17.192 08:58:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.192 08:58:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:17.192 08:58:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.192 08:58:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.192 08:58:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.101 08:58:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:19.101 00:24:19.101 real 0m8.409s 00:24:19.101 user 0m25.368s 00:24:19.101 sys 0m1.797s 00:24:19.101 08:58:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:19.101 08:58:36 -- common/autotest_common.sh@10 -- # set +x 00:24:19.101 ************************************ 00:24:19.101 END TEST nvmf_shutdown_tc2 00:24:19.101 ************************************ 00:24:19.101 08:58:36 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:19.101 08:58:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:19.101 08:58:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:19.101 08:58:36 -- common/autotest_common.sh@10 -- # set +x 00:24:19.361 ************************************ 00:24:19.361 START TEST nvmf_shutdown_tc3 00:24:19.361 ************************************ 00:24:19.361 08:58:36 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 
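Both tc2 above and tc3 below gate their shutdown on the same waitforio helper: poll bdevperf's iostat until the first bdev has completed at least 100 reads (the tc2 trace shows read_io_count climb 3 -> 75 -> 198 before the break fires). Condensed, and assuming SPDK's scripts/rpc.py is what rpc_cmd wraps here (a sketch, not the actual shutdown.sh code):

waitforio() {
  local sock=$1 bdev=$2 i=10 count
  while (( i-- )); do
    count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
    [ "$count" -ge 100 ] && return 0   # enough I/O observed, safe to shut down
    sleep 0.25
  done
  return 1                             # bdevperf never made progress
}
waitforio /var/tmp/bdevperf.sock Nvme1n1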
00:24:19.361 08:58:36 -- target/shutdown.sh@121 -- # starttarget 00:24:19.361 08:58:36 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:19.361 08:58:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:19.361 08:58:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.361 08:58:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:19.361 08:58:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:19.361 08:58:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:19.361 08:58:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.361 08:58:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.361 08:58:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.361 08:58:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:19.361 08:58:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:19.361 08:58:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:19.361 08:58:36 -- common/autotest_common.sh@10 -- # set +x 00:24:19.361 08:58:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:19.361 08:58:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:19.361 08:58:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:19.361 08:58:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:19.361 08:58:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:19.361 08:58:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:19.361 08:58:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:19.361 08:58:36 -- nvmf/common.sh@295 -- # net_devs=() 00:24:19.361 08:58:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:19.361 08:58:36 -- nvmf/common.sh@296 -- # e810=() 00:24:19.361 08:58:36 -- nvmf/common.sh@296 -- # local -ga e810 00:24:19.361 08:58:36 -- nvmf/common.sh@297 -- # x722=() 00:24:19.361 08:58:36 -- nvmf/common.sh@297 -- # local -ga x722 00:24:19.361 08:58:36 -- nvmf/common.sh@298 -- # mlx=() 00:24:19.361 08:58:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:19.361 08:58:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.361 08:58:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.361 08:58:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.361 08:58:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.361 08:58:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.361 08:58:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.361 08:58:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.361 08:58:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.361 08:58:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.361 08:58:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.361 08:58:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.361 08:58:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:19.361 08:58:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:19.361 08:58:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:19.361 08:58:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:19.361 08:58:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:19.361 08:58:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:19.361 08:58:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.361 08:58:36 -- nvmf/common.sh@341 -- # echo 
'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:19.361 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:19.361 08:58:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.361 08:58:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.361 08:58:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.361 08:58:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.361 08:58:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.361 08:58:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.362 08:58:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:19.362 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:19.362 08:58:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.362 08:58:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.362 08:58:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.362 08:58:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.362 08:58:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.362 08:58:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:19.362 08:58:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:19.362 08:58:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:19.362 08:58:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.362 08:58:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.362 08:58:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:19.362 08:58:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.362 08:58:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:19.362 Found net devices under 0000:af:00.0: cvl_0_0 00:24:19.362 08:58:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.362 08:58:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.362 08:58:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.362 08:58:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:19.362 08:58:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.362 08:58:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:19.362 Found net devices under 0000:af:00.1: cvl_0_1 00:24:19.362 08:58:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.362 08:58:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:19.362 08:58:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:19.362 08:58:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:19.362 08:58:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:19.362 08:58:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:19.362 08:58:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.362 08:58:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.362 08:58:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.362 08:58:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:19.362 08:58:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.362 08:58:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.362 08:58:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:19.362 08:58:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.362 08:58:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.362 08:58:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:19.362 08:58:36 -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:24:19.362 08:58:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.362 08:58:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.622 08:58:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.622 08:58:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.622 08:58:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:19.622 08:58:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.622 08:58:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.622 08:58:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.622 08:58:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:19.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:24:19.622 00:24:19.622 --- 10.0.0.2 ping statistics --- 00:24:19.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.622 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:24:19.622 08:58:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:19.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:24:19.622 00:24:19.622 --- 10.0.0.1 ping statistics --- 00:24:19.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.622 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:24:19.622 08:58:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.622 08:58:36 -- nvmf/common.sh@411 -- # return 0 00:24:19.622 08:58:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:19.622 08:58:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.622 08:58:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:19.622 08:58:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:19.622 08:58:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.622 08:58:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:19.622 08:58:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:19.622 08:58:36 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:19.622 08:58:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:19.622 08:58:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:19.622 08:58:36 -- common/autotest_common.sh@10 -- # set +x 00:24:19.622 08:58:36 -- nvmf/common.sh@470 -- # nvmfpid=2145943 00:24:19.622 08:58:36 -- nvmf/common.sh@471 -- # waitforlisten 2145943 00:24:19.622 08:58:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:19.622 08:58:36 -- common/autotest_common.sh@817 -- # '[' -z 2145943 ']' 00:24:19.622 08:58:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.622 08:58:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:19.622 08:58:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:19.622 08:58:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:19.622 08:58:36 -- common/autotest_common.sh@10 -- # set +x 00:24:19.882 [2024-04-26 08:58:36.905807] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:24:19.882 [2024-04-26 08:58:36.905853] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.882 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.882 [2024-04-26 08:58:36.994605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:19.882 [2024-04-26 08:58:37.067797] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.882 [2024-04-26 08:58:37.067836] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.882 [2024-04-26 08:58:37.067846] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.882 [2024-04-26 08:58:37.067854] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.882 [2024-04-26 08:58:37.067862] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.882 [2024-04-26 08:58:37.067971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.882 [2024-04-26 08:58:37.068058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:19.882 [2024-04-26 08:58:37.068168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.882 [2024-04-26 08:58:37.068169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:20.451 08:58:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:20.451 08:58:37 -- common/autotest_common.sh@850 -- # return 0 00:24:20.451 08:58:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:20.711 08:58:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:20.711 08:58:37 -- common/autotest_common.sh@10 -- # set +x 00:24:20.711 08:58:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.711 08:58:37 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:20.711 08:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.711 08:58:37 -- common/autotest_common.sh@10 -- # set +x 00:24:20.711 [2024-04-26 08:58:37.748120] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.711 08:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.711 08:58:37 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:20.711 08:58:37 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:20.711 08:58:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:20.711 08:58:37 -- common/autotest_common.sh@10 -- # set +x 00:24:20.711 08:58:37 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:20.711 08:58:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.711 08:58:37 -- target/shutdown.sh@28 -- # cat 00:24:20.711 08:58:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.711 08:58:37 -- target/shutdown.sh@28 -- # cat 00:24:20.711 08:58:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.711 08:58:37 -- target/shutdown.sh@28 -- # cat 
00:24:20.711 08:58:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.711 08:58:37 -- target/shutdown.sh@28 -- # cat 00:24:20.711 08:58:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.711 08:58:37 -- target/shutdown.sh@28 -- # cat 00:24:20.711 08:58:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.711 08:58:37 -- target/shutdown.sh@28 -- # cat 00:24:20.711 08:58:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.711 08:58:37 -- target/shutdown.sh@28 -- # cat 00:24:20.711 08:58:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.711 08:58:37 -- target/shutdown.sh@28 -- # cat 00:24:20.711 08:58:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.711 08:58:37 -- target/shutdown.sh@28 -- # cat 00:24:20.711 08:58:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:20.711 08:58:37 -- target/shutdown.sh@28 -- # cat 00:24:20.711 08:58:37 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:20.711 08:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.711 08:58:37 -- common/autotest_common.sh@10 -- # set +x 00:24:20.711 Malloc1 00:24:20.711 [2024-04-26 08:58:37.859063] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.711 Malloc2 00:24:20.711 Malloc3 00:24:20.971 Malloc4 00:24:20.971 Malloc5 00:24:20.971 Malloc6 00:24:20.971 Malloc7 00:24:20.971 Malloc8 00:24:20.971 Malloc9 00:24:21.232 Malloc10 00:24:21.232 08:58:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.232 08:58:38 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:21.232 08:58:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:21.232 08:58:38 -- common/autotest_common.sh@10 -- # set +x 00:24:21.232 08:58:38 -- target/shutdown.sh@125 -- # perfpid=2146216 00:24:21.232 08:58:38 -- target/shutdown.sh@126 -- # waitforlisten 2146216 /var/tmp/bdevperf.sock 00:24:21.232 08:58:38 -- common/autotest_common.sh@817 -- # '[' -z 2146216 ']' 00:24:21.232 08:58:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.232 08:58:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:21.232 08:58:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:21.232 08:58:38 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:21.232 08:58:38 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:21.232 08:58:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:21.232 08:58:38 -- common/autotest_common.sh@10 -- # set +x 00:24:21.232 08:58:38 -- nvmf/common.sh@521 -- # config=() 00:24:21.232 08:58:38 -- nvmf/common.sh@521 -- # local subsystem config 00:24:21.232 08:58:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:21.232 { 00:24:21.232 "params": { 00:24:21.232 "name": "Nvme$subsystem", 00:24:21.232 "trtype": "$TEST_TRANSPORT", 00:24:21.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.232 "adrfam": "ipv4", 00:24:21.232 "trsvcid": "$NVMF_PORT", 00:24:21.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.232 "hdgst": ${hdgst:-false}, 00:24:21.232 "ddgst": ${ddgst:-false} 00:24:21.232 }, 00:24:21.232 "method": "bdev_nvme_attach_controller" 00:24:21.232 } 00:24:21.232 EOF 00:24:21.232 )") 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # cat 00:24:21.232 08:58:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:21.232 { 00:24:21.232 "params": { 00:24:21.232 "name": "Nvme$subsystem", 00:24:21.232 "trtype": "$TEST_TRANSPORT", 00:24:21.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.232 "adrfam": "ipv4", 00:24:21.232 "trsvcid": "$NVMF_PORT", 00:24:21.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.232 "hdgst": ${hdgst:-false}, 00:24:21.232 "ddgst": ${ddgst:-false} 00:24:21.232 }, 00:24:21.232 "method": "bdev_nvme_attach_controller" 00:24:21.232 } 00:24:21.232 EOF 00:24:21.232 )") 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # cat 00:24:21.232 08:58:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:21.232 { 00:24:21.232 "params": { 00:24:21.232 "name": "Nvme$subsystem", 00:24:21.232 "trtype": "$TEST_TRANSPORT", 00:24:21.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.232 "adrfam": "ipv4", 00:24:21.232 "trsvcid": "$NVMF_PORT", 00:24:21.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.232 "hdgst": ${hdgst:-false}, 00:24:21.232 "ddgst": ${ddgst:-false} 00:24:21.232 }, 00:24:21.232 "method": "bdev_nvme_attach_controller" 00:24:21.232 } 00:24:21.232 EOF 00:24:21.232 )") 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # cat 00:24:21.232 08:58:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:21.232 { 00:24:21.232 "params": { 00:24:21.232 "name": "Nvme$subsystem", 00:24:21.232 "trtype": "$TEST_TRANSPORT", 00:24:21.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.232 "adrfam": "ipv4", 00:24:21.232 "trsvcid": "$NVMF_PORT", 00:24:21.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.232 "hdgst": ${hdgst:-false}, 00:24:21.232 "ddgst": ${ddgst:-false} 00:24:21.232 }, 00:24:21.232 "method": "bdev_nvme_attach_controller" 00:24:21.232 } 00:24:21.232 EOF 00:24:21.232 )") 
00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # cat 00:24:21.232 08:58:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:21.232 { 00:24:21.232 "params": { 00:24:21.232 "name": "Nvme$subsystem", 00:24:21.232 "trtype": "$TEST_TRANSPORT", 00:24:21.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.232 "adrfam": "ipv4", 00:24:21.232 "trsvcid": "$NVMF_PORT", 00:24:21.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.232 "hdgst": ${hdgst:-false}, 00:24:21.232 "ddgst": ${ddgst:-false} 00:24:21.232 }, 00:24:21.232 "method": "bdev_nvme_attach_controller" 00:24:21.232 } 00:24:21.232 EOF 00:24:21.232 )") 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # cat 00:24:21.232 08:58:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:21.232 { 00:24:21.232 "params": { 00:24:21.232 "name": "Nvme$subsystem", 00:24:21.232 "trtype": "$TEST_TRANSPORT", 00:24:21.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.232 "adrfam": "ipv4", 00:24:21.232 "trsvcid": "$NVMF_PORT", 00:24:21.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.232 "hdgst": ${hdgst:-false}, 00:24:21.232 "ddgst": ${ddgst:-false} 00:24:21.232 }, 00:24:21.232 "method": "bdev_nvme_attach_controller" 00:24:21.232 } 00:24:21.232 EOF 00:24:21.232 )") 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # cat 00:24:21.232 [2024-04-26 08:58:38.346456] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:24:21.232 [2024-04-26 08:58:38.346509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146216 ] 00:24:21.232 08:58:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:21.232 { 00:24:21.232 "params": { 00:24:21.232 "name": "Nvme$subsystem", 00:24:21.232 "trtype": "$TEST_TRANSPORT", 00:24:21.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.232 "adrfam": "ipv4", 00:24:21.232 "trsvcid": "$NVMF_PORT", 00:24:21.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.232 "hdgst": ${hdgst:-false}, 00:24:21.232 "ddgst": ${ddgst:-false} 00:24:21.232 }, 00:24:21.232 "method": "bdev_nvme_attach_controller" 00:24:21.232 } 00:24:21.232 EOF 00:24:21.232 )") 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # cat 00:24:21.232 08:58:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:21.232 { 00:24:21.232 "params": { 00:24:21.232 "name": "Nvme$subsystem", 00:24:21.232 "trtype": "$TEST_TRANSPORT", 00:24:21.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.232 "adrfam": "ipv4", 00:24:21.232 "trsvcid": "$NVMF_PORT", 00:24:21.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.232 "hdgst": ${hdgst:-false}, 00:24:21.232 "ddgst": ${ddgst:-false} 00:24:21.232 }, 00:24:21.232 "method": "bdev_nvme_attach_controller" 00:24:21.232 } 00:24:21.232 EOF 00:24:21.232 )") 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # cat 00:24:21.232 08:58:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:21.232 { 00:24:21.232 "params": { 00:24:21.232 "name": "Nvme$subsystem", 00:24:21.232 "trtype": "$TEST_TRANSPORT", 00:24:21.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.232 "adrfam": "ipv4", 00:24:21.232 "trsvcid": "$NVMF_PORT", 00:24:21.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.232 "hdgst": ${hdgst:-false}, 00:24:21.232 "ddgst": ${ddgst:-false} 00:24:21.232 }, 00:24:21.232 "method": "bdev_nvme_attach_controller" 00:24:21.232 } 00:24:21.232 EOF 00:24:21.232 )") 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # cat 00:24:21.232 08:58:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:21.232 08:58:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:21.232 { 00:24:21.232 "params": { 00:24:21.232 "name": "Nvme$subsystem", 00:24:21.232 "trtype": "$TEST_TRANSPORT", 00:24:21.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.232 "adrfam": "ipv4", 00:24:21.232 "trsvcid": "$NVMF_PORT", 00:24:21.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.232 "hdgst": ${hdgst:-false}, 00:24:21.232 "ddgst": ${ddgst:-false} 00:24:21.232 }, 00:24:21.232 "method": "bdev_nvme_attach_controller" 00:24:21.232 } 00:24:21.232 EOF 00:24:21.232 )") 00:24:21.233 08:58:38 -- nvmf/common.sh@543 -- # cat 00:24:21.233 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.233 08:58:38 -- nvmf/common.sh@545 -- # jq . 00:24:21.233 08:58:38 -- nvmf/common.sh@546 -- # IFS=, 00:24:21.233 08:58:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:21.233 "params": { 00:24:21.233 "name": "Nvme1", 00:24:21.233 "trtype": "tcp", 00:24:21.233 "traddr": "10.0.0.2", 00:24:21.233 "adrfam": "ipv4", 00:24:21.233 "trsvcid": "4420", 00:24:21.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.233 "hdgst": false, 00:24:21.233 "ddgst": false 00:24:21.233 }, 00:24:21.233 "method": "bdev_nvme_attach_controller" 00:24:21.233 },{ 00:24:21.233 "params": { 00:24:21.233 "name": "Nvme2", 00:24:21.233 "trtype": "tcp", 00:24:21.233 "traddr": "10.0.0.2", 00:24:21.233 "adrfam": "ipv4", 00:24:21.233 "trsvcid": "4420", 00:24:21.233 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:21.233 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:21.233 "hdgst": false, 00:24:21.233 "ddgst": false 00:24:21.233 }, 00:24:21.233 "method": "bdev_nvme_attach_controller" 00:24:21.233 },{ 00:24:21.233 "params": { 00:24:21.233 "name": "Nvme3", 00:24:21.233 "trtype": "tcp", 00:24:21.233 "traddr": "10.0.0.2", 00:24:21.233 "adrfam": "ipv4", 00:24:21.233 "trsvcid": "4420", 00:24:21.233 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:21.233 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:21.233 "hdgst": false, 00:24:21.233 "ddgst": false 00:24:21.233 }, 00:24:21.233 "method": "bdev_nvme_attach_controller" 00:24:21.233 },{ 00:24:21.233 "params": { 00:24:21.233 "name": "Nvme4", 00:24:21.233 "trtype": "tcp", 00:24:21.233 "traddr": "10.0.0.2", 00:24:21.233 "adrfam": "ipv4", 00:24:21.233 "trsvcid": "4420", 00:24:21.233 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:21.233 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:21.233 "hdgst": false, 00:24:21.233 "ddgst": false 00:24:21.233 }, 00:24:21.233 "method": "bdev_nvme_attach_controller" 00:24:21.233 },{ 00:24:21.233 "params": { 00:24:21.233 "name": "Nvme5", 00:24:21.233 "trtype": "tcp", 00:24:21.233 "traddr": "10.0.0.2", 00:24:21.233 "adrfam": 
"ipv4", 00:24:21.233 "trsvcid": "4420", 00:24:21.233 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:21.233 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:21.233 "hdgst": false, 00:24:21.233 "ddgst": false 00:24:21.233 }, 00:24:21.233 "method": "bdev_nvme_attach_controller" 00:24:21.233 },{ 00:24:21.233 "params": { 00:24:21.233 "name": "Nvme6", 00:24:21.233 "trtype": "tcp", 00:24:21.233 "traddr": "10.0.0.2", 00:24:21.233 "adrfam": "ipv4", 00:24:21.233 "trsvcid": "4420", 00:24:21.233 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:21.233 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:21.233 "hdgst": false, 00:24:21.233 "ddgst": false 00:24:21.233 }, 00:24:21.233 "method": "bdev_nvme_attach_controller" 00:24:21.233 },{ 00:24:21.233 "params": { 00:24:21.233 "name": "Nvme7", 00:24:21.233 "trtype": "tcp", 00:24:21.233 "traddr": "10.0.0.2", 00:24:21.233 "adrfam": "ipv4", 00:24:21.233 "trsvcid": "4420", 00:24:21.233 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:21.233 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:21.233 "hdgst": false, 00:24:21.233 "ddgst": false 00:24:21.233 }, 00:24:21.233 "method": "bdev_nvme_attach_controller" 00:24:21.233 },{ 00:24:21.233 "params": { 00:24:21.233 "name": "Nvme8", 00:24:21.233 "trtype": "tcp", 00:24:21.233 "traddr": "10.0.0.2", 00:24:21.233 "adrfam": "ipv4", 00:24:21.233 "trsvcid": "4420", 00:24:21.233 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:21.233 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:21.233 "hdgst": false, 00:24:21.233 "ddgst": false 00:24:21.233 }, 00:24:21.233 "method": "bdev_nvme_attach_controller" 00:24:21.233 },{ 00:24:21.233 "params": { 00:24:21.233 "name": "Nvme9", 00:24:21.233 "trtype": "tcp", 00:24:21.233 "traddr": "10.0.0.2", 00:24:21.233 "adrfam": "ipv4", 00:24:21.233 "trsvcid": "4420", 00:24:21.233 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:21.233 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:21.233 "hdgst": false, 00:24:21.233 "ddgst": false 00:24:21.233 }, 00:24:21.233 "method": "bdev_nvme_attach_controller" 00:24:21.233 },{ 00:24:21.233 "params": { 00:24:21.233 "name": "Nvme10", 00:24:21.233 "trtype": "tcp", 00:24:21.233 "traddr": "10.0.0.2", 00:24:21.233 "adrfam": "ipv4", 00:24:21.233 "trsvcid": "4420", 00:24:21.233 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:21.233 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:21.233 "hdgst": false, 00:24:21.233 "ddgst": false 00:24:21.233 }, 00:24:21.233 "method": "bdev_nvme_attach_controller" 00:24:21.233 }' 00:24:21.233 [2024-04-26 08:58:38.419165] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.524 [2024-04-26 08:58:38.486239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.439 Running I/O for 10 seconds... 
00:24:23.699 08:58:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:23.699 08:58:40 -- common/autotest_common.sh@850 -- # return 0 00:24:23.699 08:58:40 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:23.699 08:58:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.699 08:58:40 -- common/autotest_common.sh@10 -- # set +x 00:24:23.699 08:58:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.699 08:58:40 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:23.699 08:58:40 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:23.699 08:58:40 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:23.699 08:58:40 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:23.699 08:58:40 -- target/shutdown.sh@57 -- # local ret=1 00:24:23.699 08:58:40 -- target/shutdown.sh@58 -- # local i 00:24:23.699 08:58:40 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:23.699 08:58:40 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:23.699 08:58:40 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:23.699 08:58:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.699 08:58:40 -- common/autotest_common.sh@10 -- # set +x 00:24:23.699 08:58:40 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:23.975 08:58:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.975 08:58:40 -- target/shutdown.sh@60 -- # read_io_count=195 00:24:23.975 08:58:40 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:24:23.975 08:58:40 -- target/shutdown.sh@64 -- # ret=0 00:24:23.975 08:58:40 -- target/shutdown.sh@65 -- # break 00:24:23.975 08:58:40 -- target/shutdown.sh@69 -- # return 0 00:24:23.975 08:58:40 -- target/shutdown.sh@135 -- # killprocess 2145943 00:24:23.975 08:58:40 -- common/autotest_common.sh@936 -- # '[' -z 2145943 ']' 00:24:23.975 08:58:40 -- common/autotest_common.sh@940 -- # kill -0 2145943 00:24:23.975 08:58:40 -- common/autotest_common.sh@941 -- # uname 00:24:23.975 08:58:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:23.975 08:58:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2145943 00:24:23.975 08:58:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:23.975 08:58:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:23.975 08:58:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2145943' 00:24:23.975 killing process with pid 2145943 00:24:23.975 08:58:41 -- common/autotest_common.sh@955 -- # kill 2145943 00:24:23.975 08:58:41 -- common/autotest_common.sh@960 -- # wait 2145943 00:24:23.975 [2024-04-26 08:58:41.048216] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24627a0 is same with the state(5) to be set 00:24:23.975 [2024-04-26 08:58:41.048286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24627a0 is same with the state(5) to be set 00:24:23.975 [2024-04-26 08:58:41.048297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24627a0 is same with the state(5) to be set 00:24:23.975 [2024-04-26 08:58:41.048306] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24627a0 is same with the state(5) to be set 00:24:23.975 [2024-04-26 08:58:41.048315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
00:24:23.975 [2024-04-26 08:58:41.049782] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2474a50 is same with the state(5) to be set
[... previous message repeated through 08:58:41.050364 for tqpair=0x2474a50; duplicate lines collapsed ...]
00:24:23.977 [2024-04-26 08:58:41.051191] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2462c30 is same with the state(5) to be set
00:24:23.977 [2024-04-26 08:58:41.052169] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24630c0 is same with the state(5) to be set
[... previous message repeated through 08:58:41.052744 for tqpair=0x24630c0; duplicate lines collapsed ...]
00:24:23.978 [2024-04-26 08:58:41.053617] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2463550 is same with the state(5) to be set
[... previous message repeated through 08:58:41.054188 for tqpair=0x2463550; duplicate lines collapsed ...]
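The recv-state runs above were collapsed by hand for readability; the same condensation can be reproduced offline from a saved copy of this console output (console.log is a hypothetical file name):

    # strip the Jenkins and SPDK timestamps so identical messages become
    # adjacent duplicates, then count each run
    sed -E 's/^[0-9:.]+ \[[0-9 :.-]+\] //' console.log | uniq -c | sort -rn | head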
00:24:23.978 [2024-04-26 08:58:41.054226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.978 [2024-04-26 08:58:41.054259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching ASYNC EVENT REQUEST command/completion pairs for cid:1, cid:2 and cid:3 collapsed ...]
00:24:23.979 [2024-04-26 08:58:41.054329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0f040 is same with the state(5) to be set
[... analogous aborted ASYNC EVENT REQUEST sequences for tqpair=0xa30fc0, 0x5ee930, 0xa04760 and 0xb310a0 collapsed ...]
00:24:23.979 [2024-04-26 08:58:41.054762] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set
[... previous message repeated through 08:58:41.055334 for tqpair=0x24639e0; duplicate lines collapsed; in the raw console this run is interleaved with the nvme_qpair notices below ...]
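To gauge how many distinct qpairs this shutdown storm touched, the addresses can be pulled straight from the saved log (same hypothetical console.log):

    # list every tqpair address that logged a recv-state error, with a hit count
    grep -oE 'tqpair=0x[0-9a-f]+' console.log | sort | uniq -c | sort -rn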
is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054838] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054856] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054865] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054874] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054892] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054901] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054910] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054918] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054936] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054962] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054971] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054979] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054989] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.054997] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055006] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055014] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055023] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055042] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055051] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055060] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055069] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055078] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055097] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055106] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055114] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055123] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055132] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055141] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.979 [2024-04-26 08:58:41.055150] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055158] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055167] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055175] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055183] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055192] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055201] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055210] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the 
state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055227] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055236] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055244] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with [2024-04-26 08:58:41.055238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:1the state(5) to be set 00:24:23.980 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055267] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055276] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-26 08:58:41.055294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055304] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055313] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055323] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055334] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24639e0 is same with the state(5) to be set 00:24:23.980 [2024-04-26 08:58:41.055337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
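
The "(00/08)" pair in the ABORTED - SQ DELETION notices above is SPDK's status code type / status code: generic status code type (0x00) with generic status code 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion. A minimal sketch of checking for that status in a completion, assuming only the public spdk/nvme.h API; aborted_by_sq_deletion is a hypothetical helper name, not part of this test:

    #include <stdbool.h>
    #include "spdk/nvme.h"

    /* Sketch only: returns true when a completion carries the "(00/08)"
     * status printed in the log, i.e. status code type
     * SPDK_NVME_SCT_GENERIC (0x00) with status code
     * SPDK_NVME_SC_ABORTED_SQ_DELETION (0x08). */
    static bool
    aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
    {
            return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                   cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }
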
00:24:23.980 [2024-04-26 08:58:41.055561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.980 [2024-04-26 08:58:41.055708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.980 [2024-04-26 08:58:41.055717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.055727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.055737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.055748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.055758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 
[2024-04-26 08:58:41.055768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.055777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.055787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.055798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.055809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.055819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.055831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.055839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.055850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.055859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.055870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.055878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.055889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.055898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.055908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.055918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.055928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.055937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.055947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.055956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 
08:58:41.055966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.055976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.055987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.055996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056167] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.981 [2024-04-26 08:58:41.056511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.981 [2024-04-26 08:58:41.056520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.982 [2024-04-26 08:58:41.056530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.982 [2024-04-26 08:58:41.056540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.982 [2024-04-26 08:58:41.056550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.982 [2024-04-26 08:58:41.056560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.982 [2024-04-26 08:58:41.056586] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:23.982 [2024-04-26 08:58:41.056643] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb14c60 was disconnected and freed. reset controller. 00:24:23.982 [2024-04-26 08:58:41.056825] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056840] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056849] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056858] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056876] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056884] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056901] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056910] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056918] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056936] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056953] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056961] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056972] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056982] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.056991] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057000] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be 
set 00:24:23.982 [2024-04-26 08:58:41.057009] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057017] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057026] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057035] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057044] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057053] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057062] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057070] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057079] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057087] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057105] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057114] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057123] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057133] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057143] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057152] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057161] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057169] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057178] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057187] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057196] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057205] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057223] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057240] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057249] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057258] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057267] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057275] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057284] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057293] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057301] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057310] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057318] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057327] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057336] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057345] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057353] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057362] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057371] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.057380] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2213440 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.058694] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22138d0 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.058716] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22138d0 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.059022] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.059040] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.059049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.059058] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.059067] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.059079] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.059087] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.059096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.059104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.059114] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.059123] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.982 [2024-04-26 08:58:41.059131] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059139] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059148] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059156] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059164] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059173] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059182] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059190] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059198] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the 
state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059206] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059223] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059233] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059241] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059249] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059258] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059266] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059274] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059284] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059293] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059302] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059320] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059331] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059340] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059349] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059358] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059366] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059375] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059393] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059401] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059410] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059418] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24745c0 is same with the state(5) to be set 00:24:23.983 [2024-04-26 08:58:41.059709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.059735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.059750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.059759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.059771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.059780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.059791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.059800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.059813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.059822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.059832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.059842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.059852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.059863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.059875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.059884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.059894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.059905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.059916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.059924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.059936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.059945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.059955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.059964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.059974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.059984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.059995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.060004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.060014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.060023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.060033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.060043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.060054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.060063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.060073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.060082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.060093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.060102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.060114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.060123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.060135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.060145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.060155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.060165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.060175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.060183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.983 [2024-04-26 08:58:41.060194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.983 [2024-04-26 08:58:41.060203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:23.984 [2024-04-26 08:58:41.060508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 
08:58:41.060709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.060758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.060767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.076988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.077017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.077032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.077045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.077060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.077072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.077088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.077100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.984 [2024-04-26 08:58:41.077115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.984 [2024-04-26 08:58:41.077129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077188] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077417] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9f9d20 was disconnected and freed. reset controller. 
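(Annotation: the repeated "ABORTED - SQ DELETION (00/08)" completions above are the expected result of the test tearing down the I/O submission queue while I/O is in flight; the hex pair is (status code type / status code), i.e. SCT 0x00 generic with SC 0x08, "Command Aborted due to SQ Deletion". A minimal sketch of recognizing that status in a completion callback with the public SPDK API — the function name and comment are illustrative, not taken from this test:

#include "spdk/nvme.h"

/* Hypothetical I/O completion callback (spdk_nvme_cmd_cb signature). */
static void
io_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The command was aborted when its submission queue was
		 * deleted (the 00/08 status printed above); requeueing it
		 * after the controller reset is typically safe. */
	}
}
)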
00:24:23.985 [2024-04-26 08:58:41.077594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 
[2024-04-26 08:58:41.077886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.077980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.077995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 
08:58:41.078159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.985 [2024-04-26 08:58:41.078479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.985 [2024-04-26 08:58:41.078494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.078982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.078995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079427] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9fc680 was disconnected and freed. reset controller. 00:24:23.986 [2024-04-26 08:58:41.079914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.079981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.079994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.080012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.080025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.080040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.080053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.080068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.080081] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.080096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.986 [2024-04-26 08:58:41.080109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.986 [2024-04-26 08:58:41.080123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.080983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.080998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.081010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.081025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.081040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.081054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.081066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.081080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.081093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.081107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.081120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.081134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.081147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.081162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.081174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:23.987 [2024-04-26 08:58:41.081188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.987 [2024-04-26 08:58:41.081202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.988 [2024-04-26 08:58:41.081674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.081705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:23.988 [2024-04-26 08:58:41.081765] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9fd930 was disconnected and freed. reset controller. 
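(Annotation: the "CQ transport error -6 (No such device or address)" entry above is spdk_nvme_qpair_process_completions() surfacing -ENXIO after the TCP connection dropped, which is what prompts bdev_nvme to free the qpair and reset the controller. A minimal sketch of reacting to that return code from a poll loop, using the blocking reset for brevity; the non-blocking variant the log actually exercises is sketched after the final failure below:

#include <errno.h>
#include "spdk/nvme.h"

/* Hypothetical poller; names are illustrative. */
static void
poll_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	/* 0 = no limit on completions processed per call. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc == -ENXIO) {
		/* Transport failure (the "-6" above): recover with a full
		 * controller reset. */
		spdk_nvme_ctrlr_reset(ctrlr);
	}
}
)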
00:24:23.988 [2024-04-26 08:58:41.081852] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:23.988 [2024-04-26 08:58:41.081901] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0f040 (9): Bad file descriptor 00:24:23.988 [2024-04-26 08:58:41.081948] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa30fc0 (9): Bad file descriptor 00:24:23.988 [2024-04-26 08:58:41.081991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.988 [2024-04-26 08:58:41.082006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.082019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.988 [2024-04-26 08:58:41.082032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.082046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.988 [2024-04-26 08:58:41.082058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.082073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.988 [2024-04-26 08:58:41.082085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.082098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb3c30 is same with the state(5) to be set 00:24:23.988 [2024-04-26 08:58:41.082136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.988 [2024-04-26 08:58:41.082150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.082164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.988 [2024-04-26 08:58:41.082176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.082190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.988 [2024-04-26 08:58:41.082202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.082217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.988 [2024-04-26 08:58:41.082230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.988 [2024-04-26 08:58:41.082242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbbd10 is same with 
the state(5) to be set
00:24:23.988 [2024-04-26 08:58:41.082274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.988 [2024-04-26 08:58:41.082287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.988 [2024-04-26 08:58:41.082300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.988 [2024-04-26 08:58:41.082313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.988 [2024-04-26 08:58:41.082327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.988 [2024-04-26 08:58:41.082340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.988 [2024-04-26 08:58:41.082355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.988 [2024-04-26 08:58:41.082368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.988 [2024-04-26 08:58:41.082380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb3290 is same with the state(5) to be set
00:24:23.988 [2024-04-26 08:58:41.082403] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ee930 (9): Bad file descriptor
00:24:23.988 [2024-04-26 08:58:41.082436] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa04760 (9): Bad file descriptor
00:24:23.988 [2024-04-26 08:58:41.082482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.988 [2024-04-26 08:58:41.082497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.988 [2024-04-26 08:58:41.082512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.988 [2024-04-26 08:58:41.082525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.988 [2024-04-26 08:58:41.082539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.988 [2024-04-26 08:58:41.082552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.988 [2024-04-26 08:58:41.082565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.988 [2024-04-26 08:58:41.082578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.988 [2024-04-26 08:58:41.082591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9f8f0 is same with the state(5) to be set
00:24:23.988 [2024-04-26 08:58:41.082610] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb310a0 (9): Bad file descriptor
00:24:23.988 [2024-04-26 08:58:41.082647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.988 [2024-04-26 08:58:41.082662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.988 [2024-04-26 08:58:41.082675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.989 [2024-04-26 08:58:41.082688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.989 [2024-04-26 08:58:41.082702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.989 [2024-04-26 08:58:41.082716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.989 [2024-04-26 08:58:41.082730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.989 [2024-04-26 08:58:41.082743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.989 [2024-04-26 08:58:41.082755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2c660 is same with the state(5) to be set
00:24:23.989 [2024-04-26 08:58:41.086674] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:24:23.989 [2024-04-26 08:58:41.086716] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:24:23.989 [2024-04-26 08:58:41.086742] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2c660 (9): Bad file descriptor
00:24:23.989 [2024-04-26 08:58:41.086762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbbd10 (9): Bad file descriptor
00:24:23.989 [2024-04-26 08:58:41.087428] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:23.989 [2024-04-26 08:58:41.087470] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:24:23.989 [2024-04-26 08:58:41.087496] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9f8f0 (9): Bad file descriptor
00:24:23.989 [2024-04-26 08:58:41.087918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.989 [2024-04-26 08:58:41.088423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.989 [2024-04-26 08:58:41.088441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0f040 with addr=10.0.0.2, port=4420
00:24:23.989 [2024-04-26 08:58:41.088463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0f040 is same with the state(5) to be set
00:24:23.989 [2024-04-26 08:58:41.088545] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:23.989 [2024-04-26 08:58:41.088617] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:23.989 [2024-04-26 08:58:41.088673] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:23.989 [2024-04-26 08:58:41.088723] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:23.989 [2024-04-26 08:58:41.090043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.989 [2024-04-26 08:58:41.090264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.989 [2024-04-26 08:58:41.090287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbbbd10 with addr=10.0.0.2, port=4420
00:24:23.989 [2024-04-26 08:58:41.090302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbbd10 is same with the state(5) to be set
00:24:23.989 [2024-04-26 08:58:41.090734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.989 [2024-04-26 08:58:41.091105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.989 [2024-04-26 08:58:41.091122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2c660 with addr=10.0.0.2, port=4420
00:24:23.989 [2024-04-26 08:58:41.091136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2c660 is same with the state(5) to be set
00:24:23.989 [2024-04-26 08:58:41.091167] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0f040 (9): Bad file descriptor
00:24:23.989 [2024-04-26 08:58:41.091294] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:23.989 [2024-04-26 08:58:41.091682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.989 [2024-04-26 08:58:41.092096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.989 [2024-04-26 08:58:41.092113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9f8f0 with addr=10.0.0.2, port=4420
00:24:23.989 [2024-04-26 08:58:41.092128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9f8f0 is same with the state(5) to be set
00:24:23.989 [2024-04-26 08:58:41.092144] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbbd10 (9): Bad file descriptor
00:24:23.989 [2024-04-26 08:58:41.092161] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2c660 (9): Bad file descriptor
00:24:23.989 [2024-04-26 08:58:41.092176] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:24:23.989 [2024-04-26 08:58:41.092190] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:24:23.989 [2024-04-26 08:58:41.092205] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:24:23.989 [2024-04-26 08:58:41.092316] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:23.989 [2024-04-26 08:58:41.092334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9f8f0 (9): Bad file descriptor
00:24:23.989 [2024-04-26 08:58:41.092358] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:24:23.989 [2024-04-26 08:58:41.092369] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:24:23.989 [2024-04-26 08:58:41.092381] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:24:23.989 [2024-04-26 08:58:41.092397] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:24:23.989 [2024-04-26 08:58:41.092409] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:24:23.989 [2024-04-26 08:58:41.092420] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:24:23.989 [2024-04-26 08:58:41.092445] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb3c30 (9): Bad file descriptor
00:24:23.989 [2024-04-26 08:58:41.092475] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb3290 (9): Bad file descriptor
00:24:23.989 [2024-04-26 08:58:41.092554] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:23.989 [2024-04-26 08:58:41.092567] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:23.989 [2024-04-26 08:58:41.092591] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:24:23.989 [2024-04-26 08:58:41.092603] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:24:23.989 [2024-04-26 08:58:41.092614] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:24:23.989 [2024-04-26 08:58:41.092675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.989 [2024-04-26 08:58:41.092691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.989 [2024-04-26 08:58:41.092710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.989 [2024-04-26 08:58:41.092722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.989 [2024-04-26 08:58:41.092736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.989 [2024-04-26 08:58:41.092748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.989 [2024-04-26 08:58:41.092762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.989 [2024-04-26 08:58:41.092774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.989 [2024-04-26 08:58:41.092788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.989 [2024-04-26 08:58:41.092800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.989 [2024-04-26 08:58:41.092813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.989 [2024-04-26 08:58:41.092825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.989 [2024-04-26 08:58:41.092839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.989 [2024-04-26 08:58:41.092855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.989 [2024-04-26 08:58:41.092869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.989 [2024-04-26 08:58:41.092881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.989 [2024-04-26 08:58:41.092895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.989 [2024-04-26 08:58:41.092908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.989 [2024-04-26 08:58:41.092921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.989 [2024-04-26 08:58:41.092933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 
08:58:41.092947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.092959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.092973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.092985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.092999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093209] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.990 [2024-04-26 08:58:41.093954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.990 [2024-04-26 08:58:41.093967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.093980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.093994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.094006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.094020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.094032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.094045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.094057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.094071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.094083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.094097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.094109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.094122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.094134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.094148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.094161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.094179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.094191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.094204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.094216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.094230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.094242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.094256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.094268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.094283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.094295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.094308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.094320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.094334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.094346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.094358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb13960 is same with the state(5) to be set 00:24:23.991 [2024-04-26 08:58:41.095570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095727] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.095985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.095999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.096011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.096027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.096039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.096052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.096064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.096077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.096090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.096104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.096115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.096129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.096141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.096154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.096167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.096180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.991 [2024-04-26 08:58:41.096192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.991 [2024-04-26 08:58:41.096206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:23.992 [2024-04-26 08:58:41.096789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.096977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.096990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.097002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.097017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.097028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 
08:58:41.097042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.097053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.097067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.097078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.097092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.097103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.097118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.097130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.097144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.097156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.097170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.097181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.097195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.097206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.992 [2024-04-26 08:58:41.097220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.992 [2024-04-26 08:58:41.097232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.993 [2024-04-26 08:58:41.097247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae7e0 is same with the state(5) to be set 00:24:23.993 [2024-04-26 08:58:41.098444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.993 [2024-04-26 08:58:41.098467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.993 [2024-04-26 08:58:41.098484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.993 [2024-04-26 08:58:41.098496] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.993 [2024-04-26 08:58:41.098510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.993 [2024-04-26 08:58:41.098523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 60 further READ / ABORTED - SQ DELETION (00/08) command-completion pairs elided: cid:3 through cid:62, lba:24960 through lba:32512, len:128, qid:1 ...]
00:24:23.994 [2024-04-26 08:58:41.100079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.994 [2024-04-26 08:58:41.100091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.994 [2024-04-26 08:58:41.100104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaafc50 is same with the state(5) to be set
00:24:23.994 [2024-04-26 08:58:41.101303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.994 [2024-04-26 08:58:41.101322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 further READ / ABORTED - SQ DELETION (00/08) command-completion pairs elided: cid:1 through cid:62, lba:16512 through lba:24320, len:128, qid:1 ...]
00:24:23.996 [2024-04-26 08:58:41.102894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.996 [2024-04-26 08:58:41.102905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.996 [2024-04-26 08:58:41.102915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0bfc0 is same with the state(5) to be set
00:24:23.996 [2024-04-26 08:58:41.104182] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:23.996 [2024-04-26 08:58:41.104201] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:23.996 [2024-04-26 08:58:41.104216] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:24:23.996 [2024-04-26 08:58:41.104228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:24:23.996 [2024-04-26 08:58:41.104309] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:23.996 [2024-04-26 08:58:41.104368] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:24:23.996 [2024-04-26 08:58:41.104827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.996 [2024-04-26 08:58:41.105354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.996 [2024-04-26 08:58:41.105370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ee930 with addr=10.0.0.2, port=4420
00:24:23.996 [2024-04-26 08:58:41.105382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ee930 is same with the state(5) to be set
00:24:23.996 [2024-04-26 08:58:41.105797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.996 [2024-04-26 08:58:41.106292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.996 [2024-04-26 08:58:41.106306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa04760 with addr=10.0.0.2, port=4420
00:24:23.996 [2024-04-26 08:58:41.106317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa04760 is same with the state(5) to be set
00:24:23.996 [2024-04-26 08:58:41.106681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.996 [2024-04-26 08:58:41.107104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:23.996 [2024-04-26 08:58:41.107119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa30fc0 with addr=10.0.0.2, port=4420
00:24:23.996 [2024-04-26 08:58:41.107129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30fc0 is same with the state(5) to be set
00:24:23.996 [2024-04-26 08:58:41.107819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.996 [2024-04-26 08:58:41.107835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 further READ / ABORTED - SQ DELETION (00/08) command-completion pairs elided: cid:1 through cid:62, lba:16512 through lba:24320, len:128, qid:1 ...]
00:24:23.998 [2024-04-26 08:58:41.109172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.998 [2024-04-26 08:58:41.109181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.998 [2024-04-26 08:58:41.109192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab1100 is same with the state(5) to be set
00:24:23.998 [2024-04-26 08:58:41.110168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.998 [2024-04-26 08:58:41.110183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 9 further READ / ABORTED - SQ DELETION (00/08) command-completion pairs elided: cid:1 through cid:9, lba:16512 through lba:17536, len:128, qid:1 ...]
00:24:23.998 [2024-04-26 08:58:41.110393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.998 [2024-04-26 08:58:41.110403] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.998 [2024-04-26 08:58:41.110414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.998 [2024-04-26 08:58:41.110424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.998 [2024-04-26 08:58:41.110436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.998 [2024-04-26 08:58:41.110445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.998 [2024-04-26 08:58:41.110460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.998 [2024-04-26 08:58:41.110469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.998 [2024-04-26 08:58:41.110480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.998 [2024-04-26 08:58:41.110490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.998 [2024-04-26 08:58:41.110501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.998 [2024-04-26 08:58:41.110512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.998 [2024-04-26 08:58:41.110524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.998 [2024-04-26 08:58:41.110535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.998 [2024-04-26 08:58:41.110548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.998 [2024-04-26 08:58:41.110558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.998 [2024-04-26 08:58:41.110569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.998 [2024-04-26 08:58:41.110579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.998 [2024-04-26 08:58:41.110591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.998 [2024-04-26 08:58:41.110601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.998 [2024-04-26 08:58:41.110612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.998 [2024-04-26 08:58:41.110621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.998 [2024-04-26 08:58:41.110633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.998 [2024-04-26 08:58:41.110642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.110982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.110993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:23.999 [2024-04-26 08:58:41.111267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 08:58:41.111456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.999 [2024-04-26 08:58:41.111466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.999 [2024-04-26 
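The storm of identical completions above is the expected signature of yanking the target away mid-run: every outstanding READ on the queue pair is failed with NVMe status (00/08), i.e. status code type 0x0 (generic command status) and status code 0x08, Command Aborted due to SQ Deletion. A throwaway shell sketch for triaging such an abort storm offline; the log file name is a placeholder, not something this job actually wrote:

  # How many commands were aborted, and which LBA span did they cover?
  grep -c 'ABORTED - SQ DELETION' bdevperf.log
  grep -o 'lba:[0-9]*' bdevperf.log | sort -t: -k2 -n | sed -n '1p;$p'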
00:24:23.999 [2024-04-26 08:58:41.111476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.999 [2024-04-26 08:58:41.111486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:24.000 [... the same pair repeats for cid:62 (lba:24320) and cid:63 (lba:24448) ...]
00:24:24.000 [2024-04-26 08:58:41.111539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9fb1d0 is same with the state(5) to be set
00:24:24.000 [2024-04-26 08:58:41.113344] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:24:24.000 [2024-04-26 08:58:41.113366] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:24:24.000 [2024-04-26 08:58:41.113378] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:24:24.000 [2024-04-26 08:58:41.113391] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:24:24.000 [2024-04-26 08:58:41.113403] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:24:24.000 task offset: 27264 on job bdev=Nvme2n1 fails
00:24:24.000
00:24:24.000 Latency(us)
00:24:24.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:24.000 (each job ran with Core Mask 0x1, workload: verify, depth: 64, IO size: 65536 over Verification LBA range start 0x0 length 0x400, and ended with error after the runtime shown)
00:24:24.000 Nvme1n1  : 0.87  220.36  13.77  73.45  0.00  215733.66  19188.94  211392.92
00:24:24.000 Nvme2n1  : 0.84  229.78  14.36  76.59  0.00  203080.24   3696.23  239914.19
00:24:24.000 Nvme3n1  : 0.87  219.64  13.73  73.21  0.00  208969.52  24536.68  224814.69
00:24:24.000 Nvme4n1  : 0.88  218.92  13.68  72.97  0.00  205977.19  20656.95  183710.52
00:24:24.000 Nvme5n1  : 0.89  144.46   9.03  72.23  0.00  272781.59  20656.95  241591.91
00:24:24.000 Nvme6n1  : 0.86  223.26  13.95  74.42  0.00  194251.16  22020.10  213909.50
00:24:24.000 Nvme7n1  : 0.89  144.08   9.01  72.04  0.00  263641.50  34393.29  243269.63
00:24:24.000 Nvme8n1  : 0.86  222.93  13.93  74.31  0.00  187085.62  21390.95  201326.59
00:24:24.000 Nvme9n1  : 0.86  222.64  13.91  74.21  0.00  183659.52  26738.69  216426.09
00:24:24.000 Nvme10n1 : 0.88  145.49   9.09  72.75  0.00  245843.56  21286.09  270113.18
00:24:24.000 ===================================================================================================================
00:24:24.000 Total    :       1991.57 124.47 736.20  0.00  214643.99   3696.23  270113.18
00:24:24.000 [2024-04-26 08:58:41.134858] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:24.000 [2024-04-26 08:58:41.134890] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:24:24.000 [2024-04-26 08:58:41.135205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.000 [2024-04-26 08:58:41.135689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.000 [2024-04-26 08:58:41.135705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb310a0 with addr=10.0.0.2, port=4420
00:24:24.000 [2024-04-26 08:58:41.135718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb310a0 is same with the state(5) to be set
00:24:24.000 [2024-04-26 08:58:41.135735] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ee930 (9): Bad file descriptor
00:24:24.000 [2024-04-26 08:58:41.135750] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa04760 (9): Bad file descriptor
00:24:24.000 [2024-04-26 08:58:41.135762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa30fc0 (9): Bad file descriptor
00:24:24.000 [2024-04-26 08:58:41.136339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.000 [2024-04-26 08:58:41.136644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:24.000 [2024-04-26 08:58:41.136660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0f040 with addr=10.0.0.2, port=4420
00:24:24.000 [2024-04-26 08:58:41.136670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0f040 is same with the state(5) to be set
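The IOPS and MiB/s columns above are mutually consistent with the 65536-byte IO size: at 64 KiB per IO there are 16 IOs per MiB, so MiB/s = IOPS / 16 (220.36 / 16 = 13.77 for Nvme1n1; 1991.57 / 16 = 124.47 for the total). A quick awk sanity check of that relationship:

  awk 'BEGIN {
    io = 65536                                             # IO size from the job header
    printf "Nvme1n1: %.2f MiB/s\n", 220.36 * io / 1048576  # expect 13.77
    printf "Total:   %.2f MiB/s\n", 1991.57 * io / 1048576 # expect 124.47
  }'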
00:24:24.000 [... the connect() failed, errno = 111 / sock connection error / recv state sequence repeats for tqpair=0xb2c660, 0xbbbd10, 0xb9f8f0, 0xbb3290 and 0xbb3c30, all with addr=10.0.0.2, port=4420 ...]
00:24:24.000 [2024-04-26 08:58:41.140816] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb310a0 (9): Bad file descriptor
00:24:24.000 [2024-04-26 08:58:41.140828] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:24.000 [2024-04-26 08:58:41.140842] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:24.000 [2024-04-26 08:58:41.140853] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
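errno = 111 is ECONNREFUSED on Linux: with the target process killed, nothing is listening on 10.0.0.2:4420 any longer, so every reconnect attempt made by the host-side bdev_nvme layer is refused at the TCP level. One way to confirm both the errno meaning and the dead listener from a shell (assuming python3 and nc are available on the box):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused
  nc -z -w 1 10.0.0.2 4420 || echo 'no NVMe/TCP listener on 10.0.0.2:4420'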
00:24:24.000 [2024-04-26 08:58:41.140868] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:24:24.000 [2024-04-26 08:58:41.140878] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:24:24.000 [2024-04-26 08:58:41.140886] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:24:24.000 [... the same three-line sequence repeats for cnode4 ...]
00:24:24.000 [2024-04-26 08:58:41.140938] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. [repeated 4 times]
00:24:24.000 [2024-04-26 08:58:41.141504] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. [repeated 3 times]
00:24:24.000 [... Failed to flush tqpair=0xa0f040, 0xb2c660, 0xbbbd10, 0xb9f8f0, 0xbb3290 and 0xbb3c30 (9): Bad file descriptor ...]
00:24:24.001 [2024-04-26 08:58:41.141607] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:24:24.001 [2024-04-26 08:58:41.141615] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:24:24.001 [2024-04-26 08:58:41.141624] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:24:24.001 [2024-04-26 08:58:41.141943] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:24.001 [... the same three-line error-state sequence repeats for cnode2, cnode8, cnode6, cnode9, cnode5 and cnode7 ...]
00:24:24.001 [2024-04-26 08:58:41.142169] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. [repeated 6 times]
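By this point every attached controller, cnode1 through cnode10, has gone through disconnect, a refused reconnect and nvme_ctrlr_fail, which is exactly the situation the shutdown test is trying to provoke. On a live system the same state could be inspected over RPC; a hedged example against the bdevperf instance (the socket path is illustrative, matching the bdevperf_rpc_sock convention used later in this log):

  # List bdev_nvme controllers and their current states.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers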
08:58:41 -- target/shutdown.sh@136 -- # nvmfpid=
00:24:24.261 08:58:41 -- target/shutdown.sh@139 -- # sleep 1
00:24:25.641 08:58:42 -- target/shutdown.sh@142 -- # kill -9 2146216
00:24:25.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2146216) - No such process
00:24:25.641 08:58:42 -- target/shutdown.sh@142 -- # true
00:24:25.641 08:58:42 -- target/shutdown.sh@144 -- # stoptarget
00:24:25.641 08:58:42 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:24:25.641 08:58:42 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:25.641 08:58:42 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:25.641 08:58:42 -- target/shutdown.sh@45 -- # nvmftestfini
00:24:25.641 08:58:42 -- nvmf/common.sh@477 -- # nvmfcleanup
00:24:25.641 08:58:42 -- nvmf/common.sh@117 -- # sync
00:24:25.641 08:58:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:25.641 08:58:42 -- nvmf/common.sh@120 -- # set +e
00:24:25.641 08:58:42 -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:25.641 08:58:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:25.641 rmmod nvme_tcp
00:24:25.641 rmmod nvme_fabrics
00:24:25.641 rmmod nvme_keyring
00:24:25.641 08:58:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:25.641 08:58:42 -- nvmf/common.sh@124 -- # set -e
00:24:25.641 08:58:42 -- nvmf/common.sh@125 -- # return 0
00:24:25.641 08:58:42 -- nvmf/common.sh@478 -- # '[' -n '' ']'
00:24:25.641 08:58:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:24:25.641 08:58:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:24:25.641 08:58:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:24:25.641 08:58:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:25.641 08:58:42 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:25.641 08:58:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:25.641 08:58:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:25.641 08:58:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:27.559 08:58:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:27.559
00:24:27.559 real 0m8.225s
00:24:27.559 user 0m20.340s
00:24:27.559 sys 0m1.706s
00:24:27.559 08:58:44 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:24:27.559 08:58:44 -- common/autotest_common.sh@10 -- # set +x
00:24:27.559 ************************************
00:24:27.559 END TEST nvmf_shutdown_tc3
00:24:27.559 ************************************
00:24:27.559 08:58:44 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:24:27.559
00:24:27.559 real 0m33.950s
00:24:27.559 user 1m21.305s
00:24:27.559 sys 0m10.898s
00:24:27.559 08:58:44 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:24:27.559 08:58:44 -- common/autotest_common.sh@10 -- # set +x
00:24:27.559 ************************************
00:24:27.559 END TEST nvmf_shutdown
00:24:27.559 ************************************
00:24:27.559 08:58:44 -- nvmf/nvmf.sh@84 -- # timing_exit target
00:24:27.559 08:58:44 -- common/autotest_common.sh@716 -- # xtrace_disable
00:24:27.559 08:58:44 -- common/autotest_common.sh@10 -- # set +x
00:24:27.819 08:58:44 -- nvmf/nvmf.sh@86 -- # timing_enter host
00:24:27.819 08:58:44 -- common/autotest_common.sh@710 -- # xtrace_disable
00:24:27.819 08:58:44 -- common/autotest_common.sh@10 -- # set +x
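stoptarget plus nvmftestfini reduce to a handful of manual steps if this cleanup ever has to be reproduced by hand; a rough equivalent, with the namespace deletion standing in for the harness's _remove_spdk_ns helper (names taken from this run):

  rm -f ./local-job0-0-verify.state         # bdevperf verify state file
  modprobe -v -r nvme-tcp nvme-fabrics      # unload host-side NVMe/TCP modules
  ip netns del cvl_0_0_ns_spdk 2>/dev/null  # drop the target namespace if present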
08:58:44 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]]
00:24:27.819 08:58:44 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:24:27.819 08:58:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:24:27.819 08:58:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:24:27.819 08:58:44 -- common/autotest_common.sh@10 -- # set +x
00:24:27.819 ************************************
00:24:27.819 START TEST nvmf_multicontroller
00:24:27.819 ************************************
00:24:28.078 08:58:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:24:28.078 * Looking for test storage...
00:24:28.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:28.078 08:58:45 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:28.078 08:58:45 -- nvmf/common.sh@7 -- # uname -s
00:24:28.078 08:58:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:28.078 08:58:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:28.078 08:58:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:28.078 08:58:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:28.078 08:58:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:28.078 08:58:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:28.078 08:58:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:28.078 08:58:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:28.078 08:58:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:28.078 08:58:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:28.078 08:58:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:24:28.078 08:58:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:24:28.078 08:58:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:28.078 08:58:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:28.078 08:58:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:28.078 08:58:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:28.079 08:58:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:28.079 08:58:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:28.079 08:58:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:28.079 08:58:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:28.079 08:58:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:28.079 08:58:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... as above ...]
00:24:28.079 08:58:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... as above ...]
00:24:28.079 08:58:45 -- paths/export.sh@5 -- # export PATH
00:24:28.079 08:58:45 -- paths/export.sh@6 -- # echo [... the exported PATH ...]
00:24:28.079 08:58:45 -- nvmf/common.sh@47 -- # : 0
00:24:28.079 08:58:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:28.079 08:58:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:28.079 08:58:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:28.079 08:58:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:28.079 08:58:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:28.079 08:58:45 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:24:28.079 08:58:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:28.079 08:58:45 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:28.079 08:58:45 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:24:28.079 08:58:45 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:24:28.079 08:58:45 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:24:28.079 08:58:45 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:24:28.079 08:58:45 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:24:28.079 08:58:45 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:24:28.079 08:58:45 -- host/multicontroller.sh@23 -- # nvmftestinit
00:24:28.079 08:58:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:24:28.079 08:58:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:28.079 08:58:45 -- nvmf/common.sh@437 -- # prepare_net_devs
00:24:28.079 08:58:45 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:24:28.079 08:58:45 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:24:28.079 08:58:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:28.079 08:58:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:28.079 08:58:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
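common.sh mints a host identity once with nvme gen-hostnqn and keeps the --hostnqn/--hostid pair in NVME_HOST for later nvme connect calls. Outside the harness the same identity would be used roughly like this (target address and subsystem NQN taken from the listener set up later in this log; not actually run here):

  HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:...
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"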
00:24:28.079 08:58:45 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:24:28.079 08:58:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:24:28.079 08:58:45 -- nvmf/common.sh@285 -- # xtrace_disable
00:24:28.079 08:58:45 -- common/autotest_common.sh@10 -- # set +x
00:24:34.652 08:58:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:24:34.652 [... nvmf/common.sh@291-@298: declare the pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx arrays ...]
00:24:34.652 [... nvmf/common.sh@301-@318: append the supported Intel e810 (0x1592, 0x159b) and x722 (0x37d2) and Mellanox (0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013) device IDs ...]
00:24:34.652 08:58:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:24:34.652 08:58:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:24:34.652 08:58:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:24:34.652 08:58:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:24:34.652 08:58:51 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:24:34.652 Found 0000:af:00.0 (0x8086 - 0x159b)
00:24:34.652 Found 0000:af:00.1 (0x8086 - 0x159b)
00:24:34.652 [... per-device driver (ice) checks and net-device discovery (nvmf/common.sh@340-@390) ...]
00:24:34.652 Found net devices under 0000:af:00.0: cvl_0_0
00:24:34.652 Found net devices under 0000:af:00.1: cvl_0_1
00:24:34.652 08:58:51 -- nvmf/common.sh@403 -- # is_hw=yes
00:24:34.652 08:58:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:24:34.652 08:58:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:34.652 08:58:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:34.652 08:58:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:34.652 08:58:51 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:24:34.652 08:58:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:34.652 08:58:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:34.652 08:58:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:34.652 08:58:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:34.652 08:58:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:34.652 08:58:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:34.652 08:58:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:34.652 08:58:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:34.652 08:58:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:34.916 08:58:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:34.916 08:58:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:34.916 08:58:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:34.916 08:58:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:34.917 08:58:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:34.917 08:58:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:34.917 08:58:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:34.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:34.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms
00:24:34.917
00:24:34.917 --- 10.0.0.2 ping statistics ---
00:24:34.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:34.917 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms
00:24:34.917 08:58:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:34.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:34.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms
00:24:34.917
00:24:34.917 --- 10.0.0.1 ping statistics ---
00:24:34.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:34.917 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms
00:24:34.917 08:58:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:34.917 08:58:52 -- nvmf/common.sh@411 -- # return 0
00:24:34.917 08:58:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:34.917 08:58:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:34.917 08:58:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:24:34.917 08:58:52 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:24:34.917 08:58:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:24:34.917 08:58:52 -- nvmf/common.sh@470 -- # nvmfpid=2150566
00:24:34.917 08:58:52 -- nvmf/common.sh@471 -- # waitforlisten 2150566
00:24:34.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:34.917 08:58:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:34.917 [... transport/iso checks and waitforlisten bookkeeping (xtrace_disable / set +x, '[' ... ']' argument tests) elided ...]
00:24:35.203 [2024-04-26 08:58:52.131380] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:24:35.203 [2024-04-26 08:58:52.131424] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:35.203 EAL: No free 2048 kB hugepages reported on node 1
00:24:35.203 [2024-04-26 08:58:52.208799] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:35.203 [2024-04-26 08:58:52.279471] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:35.203 [2024-04-26 08:58:52.279523] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.203 [2024-04-26 08:58:52.279534] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.203 [2024-04-26 08:58:52.279542] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.203 [2024-04-26 08:58:52.279549] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.203 [2024-04-26 08:58:52.279730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:35.203 [2024-04-26 08:58:52.279795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:35.203 [2024-04-26 08:58:52.279797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.770 08:58:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:35.770 08:58:52 -- common/autotest_common.sh@850 -- # return 0 00:24:35.770 08:58:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:35.770 08:58:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:35.770 08:58:52 -- common/autotest_common.sh@10 -- # set +x 00:24:35.770 08:58:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.770 08:58:52 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:35.770 08:58:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.770 08:58:52 -- common/autotest_common.sh@10 -- # set +x 00:24:35.770 [2024-04-26 08:58:52.991944] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.770 08:58:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.770 08:58:52 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:35.770 08:58:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.770 08:58:52 -- common/autotest_common.sh@10 -- # set +x 00:24:36.029 Malloc0 00:24:36.029 08:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.029 08:58:53 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:36.029 08:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.029 08:58:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.029 08:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.029 08:58:53 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:36.029 08:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.029 08:58:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.029 08:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.029 08:58:53 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.029 08:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.029 08:58:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.029 [2024-04-26 08:58:53.052530] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.029 08:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.029 08:58:53 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:36.029 08:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.029 08:58:53 
-- common/autotest_common.sh@10 -- # set +x 00:24:36.029 [2024-04-26 08:58:53.060465] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:36.029 08:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.029 08:58:53 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:36.029 08:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.029 08:58:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.029 Malloc1 00:24:36.029 08:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.029 08:58:53 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:36.029 08:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.029 08:58:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.029 08:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.029 08:58:53 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:36.029 08:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.029 08:58:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.029 08:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.029 08:58:53 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:36.030 08:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.030 08:58:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.030 08:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.030 08:58:53 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:36.030 08:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.030 08:58:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.030 08:58:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.030 08:58:53 -- host/multicontroller.sh@44 -- # bdevperf_pid=2150839 00:24:36.030 08:58:53 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:36.030 08:58:53 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:36.030 08:58:53 -- host/multicontroller.sh@47 -- # waitforlisten 2150839 /var/tmp/bdevperf.sock 00:24:36.030 08:58:53 -- common/autotest_common.sh@817 -- # '[' -z 2150839 ']' 00:24:36.030 08:58:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.030 08:58:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:36.030 08:58:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
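The rpc_cmd calls above (multicontroller.sh@27 through @41) build the target configuration that the rest of the test exercises. As a hedged sketch, the same setup issued against scripts/rpc.py directly would look like the following; command names and arguments are copied from the log, while the RPC variable and the checkout-relative path are assumptions:

RPC="./scripts/rpc.py"                        # assumed path inside an SPDK checkout
$RPC nvmf_create_transport -t tcp -o -u 8192  # transport options as passed by the harness
$RPC bdev_malloc_create 64 512 -b Malloc0     # 64 MiB RAM bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Two listeners on one subsystem expose two paths to the same namespace:
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The second subsystem (cnode2 with Malloc1) is created the same way, which is why the attach attempts that follow can probe the duplicate-name handling across subsystems.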
00:24:36.030 08:58:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:36.030 08:58:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.968 08:58:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:36.968 08:58:53 -- common/autotest_common.sh@850 -- # return 0 00:24:36.968 08:58:53 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:36.968 08:58:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.968 08:58:53 -- common/autotest_common.sh@10 -- # set +x 00:24:36.968 NVMe0n1 00:24:36.968 08:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.968 08:58:54 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:36.968 08:58:54 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:36.968 08:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.968 08:58:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.227 08:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.227 1 00:24:37.227 08:58:54 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:37.227 08:58:54 -- common/autotest_common.sh@638 -- # local es=0 00:24:37.227 08:58:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:37.227 08:58:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:37.227 08:58:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:37.227 08:58:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:37.227 08:58:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:37.227 08:58:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:37.227 08:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.227 08:58:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.227 request: 00:24:37.227 { 00:24:37.227 "name": "NVMe0", 00:24:37.227 "trtype": "tcp", 00:24:37.227 "traddr": "10.0.0.2", 00:24:37.227 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:37.227 "hostaddr": "10.0.0.2", 00:24:37.227 "hostsvcid": "60000", 00:24:37.227 "adrfam": "ipv4", 00:24:37.227 "trsvcid": "4420", 00:24:37.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.227 "method": "bdev_nvme_attach_controller", 00:24:37.227 "req_id": 1 00:24:37.227 } 00:24:37.227 Got JSON-RPC error response 00:24:37.227 response: 00:24:37.227 { 00:24:37.227 "code": -114, 00:24:37.227 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:37.227 } 00:24:37.227 08:58:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:37.227 08:58:54 -- common/autotest_common.sh@641 -- # es=1 00:24:37.227 08:58:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:37.227 08:58:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:37.227 08:58:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:37.227 08:58:54 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:37.227 08:58:54 -- common/autotest_common.sh@638 -- # local es=0 00:24:37.227 08:58:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:37.227 08:58:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:37.227 08:58:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:37.227 08:58:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:37.227 08:58:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:37.227 08:58:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:37.227 08:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.227 08:58:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.227 request: 00:24:37.227 { 00:24:37.227 "name": "NVMe0", 00:24:37.227 "trtype": "tcp", 00:24:37.227 "traddr": "10.0.0.2", 00:24:37.227 "hostaddr": "10.0.0.2", 00:24:37.227 "hostsvcid": "60000", 00:24:37.227 "adrfam": "ipv4", 00:24:37.227 "trsvcid": "4420", 00:24:37.227 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:37.227 "method": "bdev_nvme_attach_controller", 00:24:37.227 "req_id": 1 00:24:37.227 } 00:24:37.227 Got JSON-RPC error response 00:24:37.227 response: 00:24:37.227 { 00:24:37.227 "code": -114, 00:24:37.227 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:37.227 } 00:24:37.227 08:58:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:37.227 08:58:54 -- common/autotest_common.sh@641 -- # es=1 00:24:37.227 08:58:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:37.227 08:58:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:37.227 08:58:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:37.227 08:58:54 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:37.227 08:58:54 -- common/autotest_common.sh@638 -- # local es=0 00:24:37.227 08:58:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:37.227 08:58:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:37.227 08:58:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:37.227 08:58:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:37.227 08:58:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:37.227 08:58:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:37.227 08:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.227 08:58:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.227 request: 00:24:37.227 { 00:24:37.227 "name": "NVMe0", 00:24:37.227 "trtype": "tcp", 00:24:37.227 "traddr": "10.0.0.2", 00:24:37.227 "hostaddr": 
"10.0.0.2", 00:24:37.227 "hostsvcid": "60000", 00:24:37.227 "adrfam": "ipv4", 00:24:37.227 "trsvcid": "4420", 00:24:37.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.227 "multipath": "disable", 00:24:37.227 "method": "bdev_nvme_attach_controller", 00:24:37.227 "req_id": 1 00:24:37.227 } 00:24:37.227 Got JSON-RPC error response 00:24:37.227 response: 00:24:37.227 { 00:24:37.227 "code": -114, 00:24:37.227 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:37.227 } 00:24:37.227 08:58:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:37.227 08:58:54 -- common/autotest_common.sh@641 -- # es=1 00:24:37.227 08:58:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:37.227 08:58:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:37.227 08:58:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:37.227 08:58:54 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:37.227 08:58:54 -- common/autotest_common.sh@638 -- # local es=0 00:24:37.227 08:58:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:37.227 08:58:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:37.227 08:58:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:37.227 08:58:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:37.227 08:58:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:37.227 08:58:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:37.227 08:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.227 08:58:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.227 request: 00:24:37.227 { 00:24:37.227 "name": "NVMe0", 00:24:37.227 "trtype": "tcp", 00:24:37.227 "traddr": "10.0.0.2", 00:24:37.227 "hostaddr": "10.0.0.2", 00:24:37.227 "hostsvcid": "60000", 00:24:37.227 "adrfam": "ipv4", 00:24:37.227 "trsvcid": "4420", 00:24:37.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.227 "multipath": "failover", 00:24:37.227 "method": "bdev_nvme_attach_controller", 00:24:37.227 "req_id": 1 00:24:37.227 } 00:24:37.227 Got JSON-RPC error response 00:24:37.227 response: 00:24:37.227 { 00:24:37.227 "code": -114, 00:24:37.227 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:37.227 } 00:24:37.227 08:58:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:37.227 08:58:54 -- common/autotest_common.sh@641 -- # es=1 00:24:37.227 08:58:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:37.227 08:58:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:37.227 08:58:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:37.227 08:58:54 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:37.227 08:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.227 08:58:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.227 00:24:37.227 08:58:54 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:24:37.227 08:58:54 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:37.227 08:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.227 08:58:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.228 08:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.228 08:58:54 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:37.228 08:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.228 08:58:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.486 00:24:37.486 08:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.486 08:58:54 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:37.486 08:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.486 08:58:54 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:37.486 08:58:54 -- common/autotest_common.sh@10 -- # set +x 00:24:37.486 08:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.486 08:58:54 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:37.486 08:58:54 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:38.424 0 00:24:38.425 08:58:55 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:38.425 08:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.425 08:58:55 -- common/autotest_common.sh@10 -- # set +x 00:24:38.425 08:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.425 08:58:55 -- host/multicontroller.sh@100 -- # killprocess 2150839 00:24:38.425 08:58:55 -- common/autotest_common.sh@936 -- # '[' -z 2150839 ']' 00:24:38.425 08:58:55 -- common/autotest_common.sh@940 -- # kill -0 2150839 00:24:38.425 08:58:55 -- common/autotest_common.sh@941 -- # uname 00:24:38.425 08:58:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:38.425 08:58:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2150839 00:24:38.684 08:58:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:38.684 08:58:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:38.684 08:58:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2150839' 00:24:38.684 killing process with pid 2150839 00:24:38.684 08:58:55 -- common/autotest_common.sh@955 -- # kill 2150839 00:24:38.684 08:58:55 -- common/autotest_common.sh@960 -- # wait 2150839 00:24:38.684 08:58:55 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.684 08:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.684 08:58:55 -- common/autotest_common.sh@10 -- # set +x 00:24:38.684 08:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.684 08:58:55 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:38.684 08:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.684 08:58:55 -- common/autotest_common.sh@10 -- # set +x 00:24:38.684 08:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.684 08:58:55 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
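The -114 responses above pin down the rule that bdev_nvme_attach_controller enforces: a bdev name may only be reused to add another path to the same subsystem with the same host identity. A minimal sketch of the accepted sequence against bdevperf's private RPC socket, with arguments taken from the log (the RPC variable is an assumption):

RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"   # socket name from the -r flag above
# First path: creates bdev NVMe0n1 over the 4420 listener.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# Reusing the name NVMe0 is accepted only for the same subsystem on a new path:
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# A different subnqn or hostnqn, or multipath disabled, returns -114 as shown above.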
00:24:38.684 08:58:55 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:38.684 08:58:55 -- common/autotest_common.sh@1598 -- # read -r file 00:24:38.944 08:58:55 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:38.944 08:58:55 -- common/autotest_common.sh@1597 -- # sort -u 00:24:38.944 08:58:55 -- common/autotest_common.sh@1599 -- # cat 00:24:38.944 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:38.944 [2024-04-26 08:58:53.164615] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:24:38.944 [2024-04-26 08:58:53.164669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2150839 ] 00:24:38.944 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.944 [2024-04-26 08:58:53.233284] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.944 [2024-04-26 08:58:53.301798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.944 [2024-04-26 08:58:54.489270] bdev.c:4551:bdev_name_add: *ERROR*: Bdev name ff7a2a6c-e2ad-4982-9112-e5b0dba7af9a already exists 00:24:38.944 [2024-04-26 08:58:54.489302] bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:ff7a2a6c-e2ad-4982-9112-e5b0dba7af9a alias for bdev NVMe1n1 00:24:38.944 [2024-04-26 08:58:54.489314] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:38.944 Running I/O for 1 seconds... 00:24:38.944 00:24:38.944 Latency(us) 00:24:38.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.944 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:38.944 NVMe0n1 : 1.01 22355.38 87.33 0.00 0.00 5707.64 3093.30 24851.25 00:24:38.944 =================================================================================================================== 00:24:38.944 Total : 22355.38 87.33 0.00 0.00 5707.64 3093.30 24851.25 00:24:38.944 Received shutdown signal, test time was about 1.000000 seconds 00:24:38.944 00:24:38.944 Latency(us) 00:24:38.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.944 =================================================================================================================== 00:24:38.944 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:38.944 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:38.944 08:58:55 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:38.944 08:58:55 -- common/autotest_common.sh@1598 -- # read -r file 00:24:38.944 08:58:55 -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:38.944 08:58:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:38.944 08:58:55 -- nvmf/common.sh@117 -- # sync 00:24:38.944 08:58:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:38.944 08:58:55 -- nvmf/common.sh@120 -- # set +e 00:24:38.944 08:58:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:38.944 08:58:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:38.944 rmmod nvme_tcp 00:24:38.944 rmmod nvme_fabrics 00:24:38.944 rmmod nvme_keyring 00:24:38.944 08:58:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:38.944 08:58:56 -- nvmf/common.sh@124 -- # set -e 
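The nvmfcleanup steps just shown run their module unloads with set +e so a still-busy module does not abort the test, then restore set -e before returning. A condensed sketch of that pattern, assuming the loop bounds shown in the log; the sleep-and-retry fallback is inferred, since this run unloaded cleanly on the first pass:

sync
set +e
for i in {1..20}; do
    # the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above come from -v
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1    # inferred: give in-flight disconnects time to drain before retrying
done
set -e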
00:24:38.944 08:58:56 -- nvmf/common.sh@125 -- # return 0 00:24:38.944 08:58:56 -- nvmf/common.sh@478 -- # '[' -n 2150566 ']' 00:24:38.944 08:58:56 -- nvmf/common.sh@479 -- # killprocess 2150566 00:24:38.944 08:58:56 -- common/autotest_common.sh@936 -- # '[' -z 2150566 ']' 00:24:38.944 08:58:56 -- common/autotest_common.sh@940 -- # kill -0 2150566 00:24:38.944 08:58:56 -- common/autotest_common.sh@941 -- # uname 00:24:38.944 08:58:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:38.944 08:58:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2150566 00:24:38.944 08:58:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:38.944 08:58:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:38.944 08:58:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2150566' 00:24:38.944 killing process with pid 2150566 00:24:38.944 08:58:56 -- common/autotest_common.sh@955 -- # kill 2150566 00:24:38.944 08:58:56 -- common/autotest_common.sh@960 -- # wait 2150566 00:24:39.204 08:58:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:39.204 08:58:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:39.204 08:58:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:39.204 08:58:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:39.204 08:58:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:39.204 08:58:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.204 08:58:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.204 08:58:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.740 08:58:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:41.740 00:24:41.740 real 0m13.378s 00:24:41.740 user 0m16.798s 00:24:41.740 sys 0m6.281s 00:24:41.740 08:58:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:41.740 08:58:58 -- common/autotest_common.sh@10 -- # set +x 00:24:41.740 ************************************ 00:24:41.740 END TEST nvmf_multicontroller 00:24:41.740 ************************************ 00:24:41.740 08:58:58 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:41.740 08:58:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:41.740 08:58:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:41.740 08:58:58 -- common/autotest_common.sh@10 -- # set +x 00:24:41.740 ************************************ 00:24:41.740 START TEST nvmf_aer 00:24:41.740 ************************************ 00:24:41.740 08:58:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:41.740 * Looking for test storage... 
00:24:41.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:41.740 08:58:58 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.740 08:58:58 -- nvmf/common.sh@7 -- # uname -s 00:24:41.740 08:58:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.740 08:58:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.740 08:58:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.740 08:58:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.740 08:58:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.740 08:58:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.740 08:58:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.740 08:58:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.740 08:58:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.740 08:58:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.740 08:58:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:41.740 08:58:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:41.740 08:58:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.740 08:58:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.740 08:58:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.740 08:58:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.740 08:58:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.740 08:58:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.740 08:58:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.740 08:58:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.740 08:58:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.740 08:58:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.740 08:58:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.740 08:58:58 -- paths/export.sh@5 -- # export PATH 00:24:41.740 08:58:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.740 08:58:58 -- nvmf/common.sh@47 -- # : 0 00:24:41.740 08:58:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:41.740 08:58:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:41.740 08:58:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.740 08:58:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.740 08:58:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.740 08:58:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:41.740 08:58:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:41.740 08:58:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:41.740 08:58:58 -- host/aer.sh@11 -- # nvmftestinit 00:24:41.740 08:58:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:41.740 08:58:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.740 08:58:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:41.740 08:58:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:41.740 08:58:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:41.740 08:58:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.740 08:58:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:41.740 08:58:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.740 08:58:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:41.740 08:58:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:41.740 08:58:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:41.740 08:58:58 -- common/autotest_common.sh@10 -- # set +x 00:24:48.322 08:59:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:48.322 08:59:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:48.322 08:59:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:48.322 08:59:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:48.322 08:59:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:48.322 08:59:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:48.322 08:59:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:48.322 08:59:04 -- nvmf/common.sh@295 -- # net_devs=() 00:24:48.322 08:59:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:48.322 08:59:04 -- nvmf/common.sh@296 -- # e810=() 00:24:48.322 08:59:04 -- nvmf/common.sh@296 -- # local -ga e810 00:24:48.322 08:59:04 -- nvmf/common.sh@297 -- # x722=() 00:24:48.322 
08:59:04 -- nvmf/common.sh@297 -- # local -ga x722 00:24:48.322 08:59:04 -- nvmf/common.sh@298 -- # mlx=() 00:24:48.322 08:59:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:48.322 08:59:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.322 08:59:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.322 08:59:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.322 08:59:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.322 08:59:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.322 08:59:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.322 08:59:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.322 08:59:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.322 08:59:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.322 08:59:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.322 08:59:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.322 08:59:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:48.322 08:59:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:48.322 08:59:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:48.322 08:59:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:48.322 08:59:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:48.322 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:48.322 08:59:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:48.322 08:59:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:48.322 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:48.322 08:59:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:48.322 08:59:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:48.322 08:59:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.322 08:59:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:48.322 08:59:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.322 08:59:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:48.322 Found net devices under 0000:af:00.0: cvl_0_0 00:24:48.322 08:59:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.322 08:59:04 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:48.322 08:59:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.322 08:59:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:48.322 08:59:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.322 08:59:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:48.322 Found net devices under 0000:af:00.1: cvl_0_1 00:24:48.322 08:59:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.322 08:59:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:48.322 08:59:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:48.322 08:59:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:48.322 08:59:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:48.322 08:59:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.322 08:59:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.322 08:59:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.322 08:59:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:48.322 08:59:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.322 08:59:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.322 08:59:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:48.322 08:59:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:48.322 08:59:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.322 08:59:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:48.322 08:59:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:48.322 08:59:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:48.322 08:59:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:48.322 08:59:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:48.322 08:59:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:48.322 08:59:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:48.322 08:59:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:48.322 08:59:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:48.322 08:59:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:48.322 08:59:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:48.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:48.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:24:48.322 00:24:48.322 --- 10.0.0.2 ping statistics --- 00:24:48.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.322 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:24:48.322 08:59:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:48.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:48.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:24:48.322 00:24:48.322 --- 10.0.0.1 ping statistics --- 00:24:48.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.322 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:24:48.322 08:59:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:48.322 08:59:05 -- nvmf/common.sh@411 -- # return 0 00:24:48.322 08:59:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:48.322 08:59:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.322 08:59:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:48.322 08:59:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:48.322 08:59:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.322 08:59:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:48.322 08:59:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:48.322 08:59:05 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:48.322 08:59:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:48.322 08:59:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:48.322 08:59:05 -- common/autotest_common.sh@10 -- # set +x 00:24:48.322 08:59:05 -- nvmf/common.sh@470 -- # nvmfpid=2155057 00:24:48.322 08:59:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:48.322 08:59:05 -- nvmf/common.sh@471 -- # waitforlisten 2155057 00:24:48.322 08:59:05 -- common/autotest_common.sh@817 -- # '[' -z 2155057 ']' 00:24:48.322 08:59:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.322 08:59:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:48.322 08:59:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.322 08:59:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:48.322 08:59:05 -- common/autotest_common.sh@10 -- # set +x 00:24:48.322 [2024-04-26 08:59:05.267876] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:24:48.322 [2024-04-26 08:59:05.267927] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.322 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.323 [2024-04-26 08:59:05.342860] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:48.323 [2024-04-26 08:59:05.415819] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.323 [2024-04-26 08:59:05.415854] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.323 [2024-04-26 08:59:05.415864] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.323 [2024-04-26 08:59:05.415873] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.323 [2024-04-26 08:59:05.415897] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
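The block above replays nvmf_tcp_init and nvmfappstart for the aer test, the same sequence the multicontroller run performed earlier. A minimal sketch of it, assuming the interface names and binary paths from this job (the two E810 ports cvl_0_0/cvl_0_1 are cabled back-to-back, so moving one into a namespace yields a real TCP path on a single machine); the polling loop stands in for the harness's waitforlisten helper:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target sanity check

# Launch the target inside the namespace; the UNIX-domain RPC socket lives on the
# shared filesystem, so it can be polled from the default namespace.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for i in $(seq 1 100); do                            # simplified waitforlisten
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done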
00:24:48.323 [2024-04-26 08:59:05.415945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.323 [2024-04-26 08:59:05.416039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.323 [2024-04-26 08:59:05.416123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:48.323 [2024-04-26 08:59:05.416125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.891 08:59:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:48.891 08:59:06 -- common/autotest_common.sh@850 -- # return 0 00:24:48.891 08:59:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:48.891 08:59:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:48.891 08:59:06 -- common/autotest_common.sh@10 -- # set +x 00:24:48.891 08:59:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.891 08:59:06 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:48.891 08:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.891 08:59:06 -- common/autotest_common.sh@10 -- # set +x 00:24:48.891 [2024-04-26 08:59:06.133356] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:49.150 08:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.150 08:59:06 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:49.150 08:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.150 08:59:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.150 Malloc0 00:24:49.150 08:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.150 08:59:06 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:49.150 08:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.150 08:59:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.150 08:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.150 08:59:06 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:49.150 08:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.150 08:59:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.150 08:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.150 08:59:06 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.150 08:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.150 08:59:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.150 [2024-04-26 08:59:06.187845] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.150 08:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.150 08:59:06 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:49.150 08:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.150 08:59:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.150 [2024-04-26 08:59:06.195628] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:49.150 [ 00:24:49.150 { 00:24:49.150 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:49.150 "subtype": "Discovery", 00:24:49.150 "listen_addresses": [], 00:24:49.150 "allow_any_host": true, 00:24:49.150 "hosts": [] 00:24:49.150 }, 00:24:49.150 { 00:24:49.150 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:24:49.150 "subtype": "NVMe", 00:24:49.150 "listen_addresses": [ 00:24:49.150 { 00:24:49.150 "transport": "TCP", 00:24:49.150 "trtype": "TCP", 00:24:49.150 "adrfam": "IPv4", 00:24:49.150 "traddr": "10.0.0.2", 00:24:49.150 "trsvcid": "4420" 00:24:49.150 } 00:24:49.150 ], 00:24:49.150 "allow_any_host": true, 00:24:49.150 "hosts": [], 00:24:49.150 "serial_number": "SPDK00000000000001", 00:24:49.150 "model_number": "SPDK bdev Controller", 00:24:49.150 "max_namespaces": 2, 00:24:49.150 "min_cntlid": 1, 00:24:49.150 "max_cntlid": 65519, 00:24:49.150 "namespaces": [ 00:24:49.150 { 00:24:49.150 "nsid": 1, 00:24:49.150 "bdev_name": "Malloc0", 00:24:49.150 "name": "Malloc0", 00:24:49.150 "nguid": "0090D5AB105C4768988E029B83DE9507", 00:24:49.150 "uuid": "0090d5ab-105c-4768-988e-029b83de9507" 00:24:49.150 } 00:24:49.150 ] 00:24:49.150 } 00:24:49.150 ] 00:24:49.150 08:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.150 08:59:06 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:49.150 08:59:06 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:49.150 08:59:06 -- host/aer.sh@33 -- # aerpid=2155122 00:24:49.150 08:59:06 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:49.150 08:59:06 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:49.150 08:59:06 -- common/autotest_common.sh@1251 -- # local i=0 00:24:49.150 08:59:06 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:49.150 08:59:06 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:24:49.150 08:59:06 -- common/autotest_common.sh@1254 -- # i=1 00:24:49.150 08:59:06 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:24:49.150 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.150 08:59:06 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:49.150 08:59:06 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:24:49.150 08:59:06 -- common/autotest_common.sh@1254 -- # i=2 00:24:49.150 08:59:06 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:24:49.409 08:59:06 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:49.409 08:59:06 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:49.409 08:59:06 -- common/autotest_common.sh@1262 -- # return 0 00:24:49.409 08:59:06 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:49.409 08:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.409 08:59:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.409 Malloc1 00:24:49.409 08:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.409 08:59:06 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:49.409 08:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.409 08:59:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.409 08:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.409 08:59:06 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:49.409 08:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.409 08:59:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.409 Asynchronous Event Request test 00:24:49.409 Attaching to 10.0.0.2 00:24:49.409 Attached to 10.0.0.2 00:24:49.409 Registering asynchronous event callbacks... 
00:24:49.409 Starting namespace attribute notice tests for all controllers... 00:24:49.409 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:49.409 aer_cb - Changed Namespace 00:24:49.409 Cleaning up... 00:24:49.409 [ 00:24:49.409 { 00:24:49.409 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:49.409 "subtype": "Discovery", 00:24:49.409 "listen_addresses": [], 00:24:49.409 "allow_any_host": true, 00:24:49.409 "hosts": [] 00:24:49.409 }, 00:24:49.409 { 00:24:49.409 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.409 "subtype": "NVMe", 00:24:49.410 "listen_addresses": [ 00:24:49.410 { 00:24:49.410 "transport": "TCP", 00:24:49.410 "trtype": "TCP", 00:24:49.410 "adrfam": "IPv4", 00:24:49.410 "traddr": "10.0.0.2", 00:24:49.410 "trsvcid": "4420" 00:24:49.410 } 00:24:49.410 ], 00:24:49.410 "allow_any_host": true, 00:24:49.410 "hosts": [], 00:24:49.410 "serial_number": "SPDK00000000000001", 00:24:49.410 "model_number": "SPDK bdev Controller", 00:24:49.410 "max_namespaces": 2, 00:24:49.410 "min_cntlid": 1, 00:24:49.410 "max_cntlid": 65519, 00:24:49.410 "namespaces": [ 00:24:49.410 { 00:24:49.410 "nsid": 1, 00:24:49.410 "bdev_name": "Malloc0", 00:24:49.410 "name": "Malloc0", 00:24:49.410 "nguid": "0090D5AB105C4768988E029B83DE9507", 00:24:49.410 "uuid": "0090d5ab-105c-4768-988e-029b83de9507" 00:24:49.410 }, 00:24:49.410 { 00:24:49.410 "nsid": 2, 00:24:49.410 "bdev_name": "Malloc1", 00:24:49.410 "name": "Malloc1", 00:24:49.410 "nguid": "C08F2DD037F546C4A6385E55AC60030E", 00:24:49.410 "uuid": "c08f2dd0-37f5-46c4-a638-5e55ac60030e" 00:24:49.410 } 00:24:49.410 ] 00:24:49.410 } 00:24:49.410 ] 00:24:49.410 08:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.410 08:59:06 -- host/aer.sh@43 -- # wait 2155122 00:24:49.410 08:59:06 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:49.410 08:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.410 08:59:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.410 08:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.410 08:59:06 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:49.410 08:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.410 08:59:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.410 08:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.410 08:59:06 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:49.410 08:59:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.410 08:59:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.410 08:59:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.410 08:59:06 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:49.410 08:59:06 -- host/aer.sh@51 -- # nvmftestfini 00:24:49.410 08:59:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:49.410 08:59:06 -- nvmf/common.sh@117 -- # sync 00:24:49.410 08:59:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:49.410 08:59:06 -- nvmf/common.sh@120 -- # set +e 00:24:49.410 08:59:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:49.410 08:59:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:49.410 rmmod nvme_tcp 00:24:49.410 rmmod nvme_fabrics 00:24:49.410 rmmod nvme_keyring 00:24:49.410 08:59:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:49.410 08:59:06 -- nvmf/common.sh@124 -- # set -e 00:24:49.410 08:59:06 -- nvmf/common.sh@125 -- # return 0 00:24:49.410 08:59:06 -- nvmf/common.sh@478 -- # '[' -n 2155057 ']' 00:24:49.410 08:59:06 
-- nvmf/common.sh@479 -- # killprocess 2155057 00:24:49.410 08:59:06 -- common/autotest_common.sh@936 -- # '[' -z 2155057 ']' 00:24:49.410 08:59:06 -- common/autotest_common.sh@940 -- # kill -0 2155057 00:24:49.410 08:59:06 -- common/autotest_common.sh@941 -- # uname 00:24:49.669 08:59:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:49.669 08:59:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2155057 00:24:49.669 08:59:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:49.669 08:59:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:49.669 08:59:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2155057' 00:24:49.669 killing process with pid 2155057 00:24:49.669 08:59:06 -- common/autotest_common.sh@955 -- # kill 2155057 00:24:49.669 [2024-04-26 08:59:06.706244] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:49.669 08:59:06 -- common/autotest_common.sh@960 -- # wait 2155057 00:24:49.669 08:59:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:49.669 08:59:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:49.669 08:59:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:49.669 08:59:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:49.669 08:59:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:49.669 08:59:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.669 08:59:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.669 08:59:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.214 08:59:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:52.214 00:24:52.214 real 0m10.386s 00:24:52.214 user 0m7.676s 00:24:52.214 sys 0m5.419s 00:24:52.214 08:59:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:52.214 08:59:08 -- common/autotest_common.sh@10 -- # set +x 00:24:52.214 ************************************ 00:24:52.214 END TEST nvmf_aer 00:24:52.214 ************************************ 00:24:52.214 08:59:09 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:52.214 08:59:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:52.214 08:59:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:52.214 08:59:09 -- common/autotest_common.sh@10 -- # set +x 00:24:52.214 ************************************ 00:24:52.214 START TEST nvmf_async_init 00:24:52.214 ************************************ 00:24:52.214 08:59:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:52.214 * Looking for test storage... 
00:24:52.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.214 08:59:09 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.214 08:59:09 -- nvmf/common.sh@7 -- # uname -s 00:24:52.214 08:59:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.214 08:59:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.214 08:59:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.214 08:59:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.214 08:59:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.214 08:59:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.214 08:59:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.214 08:59:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.214 08:59:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.214 08:59:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.214 08:59:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:52.214 08:59:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:52.214 08:59:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.214 08:59:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.214 08:59:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.214 08:59:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.214 08:59:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.214 08:59:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.214 08:59:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.214 08:59:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.214 08:59:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.214 08:59:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.214 08:59:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.214 08:59:09 -- paths/export.sh@5 -- # export PATH 00:24:52.214 08:59:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.214 08:59:09 -- nvmf/common.sh@47 -- # : 0 00:24:52.214 08:59:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:52.215 08:59:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:52.215 08:59:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.215 08:59:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.215 08:59:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.215 08:59:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:52.215 08:59:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:52.215 08:59:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:52.215 08:59:09 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:52.215 08:59:09 -- host/async_init.sh@14 -- # null_block_size=512 00:24:52.215 08:59:09 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:52.215 08:59:09 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:52.215 08:59:09 -- host/async_init.sh@20 -- # uuidgen 00:24:52.215 08:59:09 -- host/async_init.sh@20 -- # tr -d - 00:24:52.215 08:59:09 -- host/async_init.sh@20 -- # nguid=3f628c328448489d98d16674d60a3be7 00:24:52.215 08:59:09 -- host/async_init.sh@22 -- # nvmftestinit 00:24:52.215 08:59:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:52.215 08:59:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.215 08:59:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:52.215 08:59:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:52.215 08:59:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:52.215 08:59:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.215 08:59:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.215 08:59:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.215 08:59:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:52.215 08:59:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:52.215 08:59:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:52.215 08:59:09 -- common/autotest_common.sh@10 -- # set +x 00:25:00.344 08:59:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:00.344 08:59:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:00.344 08:59:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:00.344 08:59:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:00.344 08:59:16 -- 
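One detail worth calling out from the setup above: the namespace GUID is just a version-4 UUID with its hyphens stripped (host/async_init.sh@20: uuidgen piped through tr -d -), which is why the later bdev_get_bdevs dumps report the same bytes twice, once as "uuid": "3f628c32-8448-489d-98d1-6674d60a3be7" and once as the 32-hex-digit NGUID. Reproduced standalone:

    nguid=$(uuidgen | tr -d -)   # e.g. 3f628c328448489d98d16674d60a3be7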
nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:00.344 08:59:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:00.344 08:59:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:00.344 08:59:16 -- nvmf/common.sh@295 -- # net_devs=() 00:25:00.344 08:59:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:00.344 08:59:16 -- nvmf/common.sh@296 -- # e810=() 00:25:00.344 08:59:16 -- nvmf/common.sh@296 -- # local -ga e810 00:25:00.344 08:59:16 -- nvmf/common.sh@297 -- # x722=() 00:25:00.344 08:59:16 -- nvmf/common.sh@297 -- # local -ga x722 00:25:00.344 08:59:16 -- nvmf/common.sh@298 -- # mlx=() 00:25:00.344 08:59:16 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:00.344 08:59:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.344 08:59:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.344 08:59:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.344 08:59:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.344 08:59:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.344 08:59:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.344 08:59:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.344 08:59:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.344 08:59:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.344 08:59:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.344 08:59:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.344 08:59:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:00.344 08:59:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:00.344 08:59:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:00.344 08:59:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.344 08:59:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:00.344 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:00.344 08:59:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.344 08:59:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:00.344 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:00.344 08:59:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:00.344 08:59:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.344 
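gather_supported_nvmf_pci_devs classifies NICs purely by PCI vendor:device ID: 0x8086 with 0x1592/0x159b goes into the e810 array, 0x8086:0x37d2 into x722, and the 0x15b3 entries cover Mellanox ConnectX parts. Both ports found here (0000:af:00.0 and .1, device 0x159b, bound to the ice driver) therefore count as E810 adapters. A quick way to reproduce the match outside the harness, assuming lspci is available:

    # List functions matching the exact vendor:device pair seen in the trace.
    lspci -d 8086:159b
    # Show which kernel driver is bound to the first port (the trace implies ice).
    basename "$(readlink /sys/bus/pci/devices/0000:af:00.0/driver)"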
08:59:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.344 08:59:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:00.344 08:59:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.344 08:59:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:00.344 Found net devices under 0000:af:00.0: cvl_0_0 00:25:00.344 08:59:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.344 08:59:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.344 08:59:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.344 08:59:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:00.344 08:59:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.344 08:59:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:00.344 Found net devices under 0000:af:00.1: cvl_0_1 00:25:00.344 08:59:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.344 08:59:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:00.344 08:59:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:00.344 08:59:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:00.344 08:59:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.344 08:59:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.344 08:59:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.344 08:59:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:00.344 08:59:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.344 08:59:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.344 08:59:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:00.344 08:59:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.344 08:59:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.344 08:59:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:00.344 08:59:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:00.344 08:59:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.344 08:59:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.344 08:59:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.344 08:59:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.344 08:59:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:00.344 08:59:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.344 08:59:16 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.344 08:59:16 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.344 08:59:16 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:00.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:25:00.344 00:25:00.344 --- 10.0.0.2 ping statistics --- 00:25:00.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.344 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:25:00.344 08:59:16 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:00.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:25:00.344 00:25:00.344 --- 10.0.0.1 ping statistics --- 00:25:00.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.344 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:25:00.344 08:59:16 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.344 08:59:16 -- nvmf/common.sh@411 -- # return 0 00:25:00.344 08:59:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:00.344 08:59:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.344 08:59:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:00.344 08:59:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.344 08:59:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:00.344 08:59:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:00.344 08:59:16 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:00.344 08:59:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:00.344 08:59:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:00.344 08:59:16 -- common/autotest_common.sh@10 -- # set +x 00:25:00.344 08:59:16 -- nvmf/common.sh@470 -- # nvmfpid=2159071 00:25:00.344 08:59:16 -- nvmf/common.sh@471 -- # waitforlisten 2159071 00:25:00.344 08:59:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:00.344 08:59:16 -- common/autotest_common.sh@817 -- # '[' -z 2159071 ']' 00:25:00.344 08:59:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.344 08:59:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:00.344 08:59:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.344 08:59:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:00.344 08:59:16 -- common/autotest_common.sh@10 -- # set +x 00:25:00.344 [2024-04-26 08:59:16.560230] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:25:00.344 [2024-04-26 08:59:16.560275] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.345 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.345 [2024-04-26 08:59:16.634225] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.345 [2024-04-26 08:59:16.705269] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.345 [2024-04-26 08:59:16.705310] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.345 [2024-04-26 08:59:16.705319] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.345 [2024-04-26 08:59:16.705328] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.345 [2024-04-26 08:59:16.705336] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
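nvmf_tcp_init, traced just above, turns the two ports into a point-to-point lab: cvl_0_0 moves into a fresh namespace (cvl_0_0_ns_spdk) as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove the path in both directions before any NVMe traffic is attempted. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side interface
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns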
00:25:00.345 [2024-04-26 08:59:16.705357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.345 08:59:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:00.345 08:59:17 -- common/autotest_common.sh@850 -- # return 0 00:25:00.345 08:59:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:00.345 08:59:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:00.345 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.345 08:59:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.345 08:59:17 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:00.345 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.345 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.345 [2024-04-26 08:59:17.396290] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.345 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.345 08:59:17 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:00.345 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.345 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.345 null0 00:25:00.345 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.345 08:59:17 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:00.345 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.345 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.345 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.345 08:59:17 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:00.345 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.345 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.345 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.345 08:59:17 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3f628c328448489d98d16674d60a3be7 00:25:00.345 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.345 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.345 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.345 08:59:17 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:00.345 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.345 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.345 [2024-04-26 08:59:17.436544] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.345 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.345 08:59:17 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:00.345 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.345 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.603 nvme0n1 00:25:00.603 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.603 08:59:17 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:00.603 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.603 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.603 [ 00:25:00.603 { 00:25:00.603 "name": "nvme0n1", 00:25:00.603 "aliases": [ 00:25:00.603 
"3f628c32-8448-489d-98d1-6674d60a3be7" 00:25:00.603 ], 00:25:00.603 "product_name": "NVMe disk", 00:25:00.603 "block_size": 512, 00:25:00.603 "num_blocks": 2097152, 00:25:00.603 "uuid": "3f628c32-8448-489d-98d1-6674d60a3be7", 00:25:00.603 "assigned_rate_limits": { 00:25:00.603 "rw_ios_per_sec": 0, 00:25:00.603 "rw_mbytes_per_sec": 0, 00:25:00.603 "r_mbytes_per_sec": 0, 00:25:00.603 "w_mbytes_per_sec": 0 00:25:00.603 }, 00:25:00.603 "claimed": false, 00:25:00.603 "zoned": false, 00:25:00.603 "supported_io_types": { 00:25:00.603 "read": true, 00:25:00.603 "write": true, 00:25:00.603 "unmap": false, 00:25:00.603 "write_zeroes": true, 00:25:00.603 "flush": true, 00:25:00.603 "reset": true, 00:25:00.603 "compare": true, 00:25:00.603 "compare_and_write": true, 00:25:00.603 "abort": true, 00:25:00.603 "nvme_admin": true, 00:25:00.603 "nvme_io": true 00:25:00.604 }, 00:25:00.604 "memory_domains": [ 00:25:00.604 { 00:25:00.604 "dma_device_id": "system", 00:25:00.604 "dma_device_type": 1 00:25:00.604 } 00:25:00.604 ], 00:25:00.604 "driver_specific": { 00:25:00.604 "nvme": [ 00:25:00.604 { 00:25:00.604 "trid": { 00:25:00.604 "trtype": "TCP", 00:25:00.604 "adrfam": "IPv4", 00:25:00.604 "traddr": "10.0.0.2", 00:25:00.604 "trsvcid": "4420", 00:25:00.604 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:00.604 }, 00:25:00.604 "ctrlr_data": { 00:25:00.604 "cntlid": 1, 00:25:00.604 "vendor_id": "0x8086", 00:25:00.604 "model_number": "SPDK bdev Controller", 00:25:00.604 "serial_number": "00000000000000000000", 00:25:00.604 "firmware_revision": "24.05", 00:25:00.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:00.604 "oacs": { 00:25:00.604 "security": 0, 00:25:00.604 "format": 0, 00:25:00.604 "firmware": 0, 00:25:00.604 "ns_manage": 0 00:25:00.604 }, 00:25:00.604 "multi_ctrlr": true, 00:25:00.604 "ana_reporting": false 00:25:00.604 }, 00:25:00.604 "vs": { 00:25:00.604 "nvme_version": "1.3" 00:25:00.604 }, 00:25:00.604 "ns_data": { 00:25:00.604 "id": 1, 00:25:00.604 "can_share": true 00:25:00.604 } 00:25:00.604 } 00:25:00.604 ], 00:25:00.604 "mp_policy": "active_passive" 00:25:00.604 } 00:25:00.604 } 00:25:00.604 ] 00:25:00.604 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.604 08:59:17 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:00.604 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.604 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.604 [2024-04-26 08:59:17.685017] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.604 [2024-04-26 08:59:17.685088] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cb0e0 (9): Bad file descriptor 00:25:00.604 [2024-04-26 08:59:17.816527] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:00.604 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.604 08:59:17 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:00.604 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.604 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.604 [ 00:25:00.604 { 00:25:00.604 "name": "nvme0n1", 00:25:00.604 "aliases": [ 00:25:00.604 "3f628c32-8448-489d-98d1-6674d60a3be7" 00:25:00.604 ], 00:25:00.604 "product_name": "NVMe disk", 00:25:00.604 "block_size": 512, 00:25:00.604 "num_blocks": 2097152, 00:25:00.604 "uuid": "3f628c32-8448-489d-98d1-6674d60a3be7", 00:25:00.604 "assigned_rate_limits": { 00:25:00.604 "rw_ios_per_sec": 0, 00:25:00.604 "rw_mbytes_per_sec": 0, 00:25:00.604 "r_mbytes_per_sec": 0, 00:25:00.604 "w_mbytes_per_sec": 0 00:25:00.604 }, 00:25:00.604 "claimed": false, 00:25:00.604 "zoned": false, 00:25:00.604 "supported_io_types": { 00:25:00.604 "read": true, 00:25:00.604 "write": true, 00:25:00.604 "unmap": false, 00:25:00.604 "write_zeroes": true, 00:25:00.604 "flush": true, 00:25:00.604 "reset": true, 00:25:00.604 "compare": true, 00:25:00.604 "compare_and_write": true, 00:25:00.604 "abort": true, 00:25:00.604 "nvme_admin": true, 00:25:00.604 "nvme_io": true 00:25:00.604 }, 00:25:00.604 "memory_domains": [ 00:25:00.604 { 00:25:00.604 "dma_device_id": "system", 00:25:00.604 "dma_device_type": 1 00:25:00.604 } 00:25:00.604 ], 00:25:00.604 "driver_specific": { 00:25:00.604 "nvme": [ 00:25:00.604 { 00:25:00.604 "trid": { 00:25:00.604 "trtype": "TCP", 00:25:00.604 "adrfam": "IPv4", 00:25:00.604 "traddr": "10.0.0.2", 00:25:00.604 "trsvcid": "4420", 00:25:00.604 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:00.604 }, 00:25:00.604 "ctrlr_data": { 00:25:00.604 "cntlid": 2, 00:25:00.604 "vendor_id": "0x8086", 00:25:00.604 "model_number": "SPDK bdev Controller", 00:25:00.604 "serial_number": "00000000000000000000", 00:25:00.604 "firmware_revision": "24.05", 00:25:00.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:00.604 "oacs": { 00:25:00.604 "security": 0, 00:25:00.604 "format": 0, 00:25:00.604 "firmware": 0, 00:25:00.604 "ns_manage": 0 00:25:00.604 }, 00:25:00.604 "multi_ctrlr": true, 00:25:00.604 "ana_reporting": false 00:25:00.604 }, 00:25:00.604 "vs": { 00:25:00.604 "nvme_version": "1.3" 00:25:00.604 }, 00:25:00.604 "ns_data": { 00:25:00.604 "id": 1, 00:25:00.604 "can_share": true 00:25:00.604 } 00:25:00.604 } 00:25:00.604 ], 00:25:00.604 "mp_policy": "active_passive" 00:25:00.604 } 00:25:00.604 } 00:25:00.604 ] 00:25:00.604 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.604 08:59:17 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.604 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.604 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.604 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.604 08:59:17 -- host/async_init.sh@53 -- # mktemp 00:25:00.863 08:59:17 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.eJnGRAsMqI 00:25:00.863 08:59:17 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:00.863 08:59:17 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.eJnGRAsMqI 00:25:00.863 08:59:17 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:00.863 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.863 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.863 08:59:17 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.863 08:59:17 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:00.863 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.863 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.863 [2024-04-26 08:59:17.873609] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:00.863 [2024-04-26 08:59:17.873731] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:00.863 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.863 08:59:17 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eJnGRAsMqI 00:25:00.863 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.863 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.863 [2024-04-26 08:59:17.881631] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:00.863 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.863 08:59:17 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.eJnGRAsMqI 00:25:00.863 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.863 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.863 [2024-04-26 08:59:17.889652] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:00.863 [2024-04-26 08:59:17.889691] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:00.863 nvme0n1 00:25:00.863 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.863 08:59:17 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:00.863 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.863 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.863 [ 00:25:00.863 { 00:25:00.863 "name": "nvme0n1", 00:25:00.863 "aliases": [ 00:25:00.863 "3f628c32-8448-489d-98d1-6674d60a3be7" 00:25:00.863 ], 00:25:00.863 "product_name": "NVMe disk", 00:25:00.863 "block_size": 512, 00:25:00.863 "num_blocks": 2097152, 00:25:00.863 "uuid": "3f628c32-8448-489d-98d1-6674d60a3be7", 00:25:00.863 "assigned_rate_limits": { 00:25:00.863 "rw_ios_per_sec": 0, 00:25:00.863 "rw_mbytes_per_sec": 0, 00:25:00.863 "r_mbytes_per_sec": 0, 00:25:00.863 "w_mbytes_per_sec": 0 00:25:00.863 }, 00:25:00.863 "claimed": false, 00:25:00.863 "zoned": false, 00:25:00.863 "supported_io_types": { 00:25:00.863 "read": true, 00:25:00.863 "write": true, 00:25:00.863 "unmap": false, 00:25:00.863 "write_zeroes": true, 00:25:00.863 "flush": true, 00:25:00.863 "reset": true, 00:25:00.863 "compare": true, 00:25:00.863 "compare_and_write": true, 00:25:00.863 "abort": true, 00:25:00.863 "nvme_admin": true, 00:25:00.863 "nvme_io": true 00:25:00.863 }, 00:25:00.863 "memory_domains": [ 00:25:00.863 { 00:25:00.863 "dma_device_id": "system", 00:25:00.863 "dma_device_type": 1 00:25:00.863 } 00:25:00.863 ], 00:25:00.863 "driver_specific": { 00:25:00.863 "nvme": [ 00:25:00.863 { 00:25:00.863 "trid": { 00:25:00.863 "trtype": "TCP", 00:25:00.863 "adrfam": "IPv4", 00:25:00.863 "traddr": "10.0.0.2", 
00:25:00.863 "trsvcid": "4421", 00:25:00.864 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:00.864 }, 00:25:00.864 "ctrlr_data": { 00:25:00.864 "cntlid": 3, 00:25:00.864 "vendor_id": "0x8086", 00:25:00.864 "model_number": "SPDK bdev Controller", 00:25:00.864 "serial_number": "00000000000000000000", 00:25:00.864 "firmware_revision": "24.05", 00:25:00.864 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:00.864 "oacs": { 00:25:00.864 "security": 0, 00:25:00.864 "format": 0, 00:25:00.864 "firmware": 0, 00:25:00.864 "ns_manage": 0 00:25:00.864 }, 00:25:00.864 "multi_ctrlr": true, 00:25:00.864 "ana_reporting": false 00:25:00.864 }, 00:25:00.864 "vs": { 00:25:00.864 "nvme_version": "1.3" 00:25:00.864 }, 00:25:00.864 "ns_data": { 00:25:00.864 "id": 1, 00:25:00.864 "can_share": true 00:25:00.864 } 00:25:00.864 } 00:25:00.864 ], 00:25:00.864 "mp_policy": "active_passive" 00:25:00.864 } 00:25:00.864 } 00:25:00.864 ] 00:25:00.864 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.864 08:59:17 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.864 08:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.864 08:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:00.864 08:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.864 08:59:17 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.eJnGRAsMqI 00:25:00.864 08:59:17 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:00.864 08:59:17 -- host/async_init.sh@78 -- # nvmftestfini 00:25:00.864 08:59:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:00.864 08:59:17 -- nvmf/common.sh@117 -- # sync 00:25:00.864 08:59:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:00.864 08:59:17 -- nvmf/common.sh@120 -- # set +e 00:25:00.864 08:59:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:00.864 08:59:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:00.864 rmmod nvme_tcp 00:25:00.864 rmmod nvme_fabrics 00:25:00.864 rmmod nvme_keyring 00:25:00.864 08:59:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:00.864 08:59:18 -- nvmf/common.sh@124 -- # set -e 00:25:00.864 08:59:18 -- nvmf/common.sh@125 -- # return 0 00:25:00.864 08:59:18 -- nvmf/common.sh@478 -- # '[' -n 2159071 ']' 00:25:00.864 08:59:18 -- nvmf/common.sh@479 -- # killprocess 2159071 00:25:00.864 08:59:18 -- common/autotest_common.sh@936 -- # '[' -z 2159071 ']' 00:25:00.864 08:59:18 -- common/autotest_common.sh@940 -- # kill -0 2159071 00:25:00.864 08:59:18 -- common/autotest_common.sh@941 -- # uname 00:25:00.864 08:59:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:00.864 08:59:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2159071 00:25:01.123 08:59:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:01.123 08:59:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:01.123 08:59:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2159071' 00:25:01.123 killing process with pid 2159071 00:25:01.123 08:59:18 -- common/autotest_common.sh@955 -- # kill 2159071 00:25:01.123 [2024-04-26 08:59:18.129794] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:01.123 [2024-04-26 08:59:18.129821] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:01.123 08:59:18 -- common/autotest_common.sh@960 -- # wait 2159071 00:25:01.123 08:59:18 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:01.123 08:59:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:01.123 08:59:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:01.123 08:59:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.123 08:59:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:01.123 08:59:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.123 08:59:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.123 08:59:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.661 08:59:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:03.661 00:25:03.661 real 0m11.182s 00:25:03.661 user 0m3.855s 00:25:03.661 sys 0m5.900s 00:25:03.661 08:59:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:03.661 08:59:20 -- common/autotest_common.sh@10 -- # set +x 00:25:03.661 ************************************ 00:25:03.661 END TEST nvmf_async_init 00:25:03.661 ************************************ 00:25:03.661 08:59:20 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:03.661 08:59:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:03.661 08:59:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:03.661 08:59:20 -- common/autotest_common.sh@10 -- # set +x 00:25:03.661 ************************************ 00:25:03.661 START TEST dma 00:25:03.661 ************************************ 00:25:03.661 08:59:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:03.661 * Looking for test storage... 00:25:03.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.661 08:59:20 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.661 08:59:20 -- nvmf/common.sh@7 -- # uname -s 00:25:03.661 08:59:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.661 08:59:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.661 08:59:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.661 08:59:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.661 08:59:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.661 08:59:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.661 08:59:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.661 08:59:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.661 08:59:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.661 08:59:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.661 08:59:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:03.661 08:59:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:03.661 08:59:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.662 08:59:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.662 08:59:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.662 08:59:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.662 08:59:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.662 08:59:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.662 08:59:20 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.662 08:59:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.662 08:59:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.662 08:59:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.662 08:59:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.662 08:59:20 -- paths/export.sh@5 -- # export PATH 00:25:03.662 08:59:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.662 08:59:20 -- nvmf/common.sh@47 -- # : 0 00:25:03.662 08:59:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:03.662 08:59:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:03.662 08:59:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.662 08:59:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.662 08:59:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.662 08:59:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:03.662 08:59:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:03.662 08:59:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:03.662 08:59:20 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:03.662 08:59:20 -- host/dma.sh@13 -- # exit 0 00:25:03.662 00:25:03.662 real 0m0.134s 00:25:03.662 user 0m0.061s 00:25:03.662 sys 0m0.083s 00:25:03.662 08:59:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:03.662 08:59:20 -- common/autotest_common.sh@10 -- # set +x 00:25:03.662 ************************************ 00:25:03.662 END TEST dma 00:25:03.662 
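The dma suite sandwiched in here is a deliberate no-op for this job: SPDK's memory-domain DMA test only applies to RDMA transports, so host/dma.sh@12-13 compares the transport and exits 0 straight away, which is why the whole test costs 0m0.134s. A sketch reconstructed from the expanded trace:

    # host/dma.sh lines 12-13; variable name assumed, the trace shows only '[' tcp '!=' rdma ']'
    [ "$TEST_TRANSPORT" != 'rdma' ] && exit 0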
************************************ 00:25:03.662 08:59:20 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:03.662 08:59:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:03.662 08:59:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:03.662 08:59:20 -- common/autotest_common.sh@10 -- # set +x 00:25:03.921 ************************************ 00:25:03.921 START TEST nvmf_identify 00:25:03.921 ************************************ 00:25:03.921 08:59:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:03.921 * Looking for test storage... 00:25:03.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.921 08:59:21 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.921 08:59:21 -- nvmf/common.sh@7 -- # uname -s 00:25:03.921 08:59:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.921 08:59:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.921 08:59:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.921 08:59:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.921 08:59:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.921 08:59:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.921 08:59:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.921 08:59:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.921 08:59:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.921 08:59:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.921 08:59:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:03.921 08:59:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:03.921 08:59:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.921 08:59:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.921 08:59:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.921 08:59:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.921 08:59:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.921 08:59:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.921 08:59:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.921 08:59:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.921 08:59:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.921 08:59:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.921 08:59:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.921 08:59:21 -- paths/export.sh@5 -- # export PATH 00:25:03.921 08:59:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.921 08:59:21 -- nvmf/common.sh@47 -- # : 0 00:25:03.921 08:59:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:03.921 08:59:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:03.921 08:59:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.921 08:59:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.921 08:59:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.921 08:59:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:03.921 08:59:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:03.921 08:59:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:03.921 08:59:21 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:03.921 08:59:21 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:03.921 08:59:21 -- host/identify.sh@14 -- # nvmftestinit 00:25:03.921 08:59:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:03.921 08:59:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.921 08:59:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:03.921 08:59:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:03.921 08:59:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:03.921 08:59:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.921 08:59:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.921 08:59:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.921 08:59:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:03.921 08:59:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:03.921 08:59:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:03.921 08:59:21 -- common/autotest_common.sh@10 -- # set +x 00:25:10.514 08:59:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
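identify.sh sizes its test namespace with MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512; bdev_malloc_create takes the first argument in megabytes and the second in bytes, so the Malloc0 created later in this run carries 64 * 1024 * 1024 / 512 = 131072 LBAs:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB at 512 B/block = 131072 blocks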
00:25:10.514 08:59:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:10.514 08:59:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:10.515 08:59:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:10.515 08:59:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:10.515 08:59:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:10.515 08:59:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:10.515 08:59:27 -- nvmf/common.sh@295 -- # net_devs=() 00:25:10.515 08:59:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:10.515 08:59:27 -- nvmf/common.sh@296 -- # e810=() 00:25:10.515 08:59:27 -- nvmf/common.sh@296 -- # local -ga e810 00:25:10.515 08:59:27 -- nvmf/common.sh@297 -- # x722=() 00:25:10.515 08:59:27 -- nvmf/common.sh@297 -- # local -ga x722 00:25:10.515 08:59:27 -- nvmf/common.sh@298 -- # mlx=() 00:25:10.515 08:59:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:10.515 08:59:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.515 08:59:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.515 08:59:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.515 08:59:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.515 08:59:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.515 08:59:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.515 08:59:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.515 08:59:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.515 08:59:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.515 08:59:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.515 08:59:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.515 08:59:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:10.515 08:59:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:10.515 08:59:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:10.515 08:59:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:10.515 08:59:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:10.515 08:59:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:10.515 08:59:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.515 08:59:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:10.515 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:10.515 08:59:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.515 08:59:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.515 08:59:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.515 08:59:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.515 08:59:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.515 08:59:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.515 08:59:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:10.515 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:10.515 08:59:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.515 08:59:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.515 08:59:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.515 08:59:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.515 08:59:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.515 08:59:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:25:10.515 08:59:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:25:10.515 08:59:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:25:10.515 08:59:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:10.515 08:59:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:10.515 08:59:27 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:25:10.515 08:59:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:10.515 08:59:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:25:10.515 Found net devices under 0000:af:00.0: cvl_0_0
00:25:10.515 08:59:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:25:10.515 08:59:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:10.515 08:59:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:10.515 08:59:27 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:25:10.515 08:59:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:10.515 08:59:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:25:10.515 Found net devices under 0000:af:00.1: cvl_0_1
00:25:10.515 08:59:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:25:10.515 08:59:27 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:25:10.515 08:59:27 -- nvmf/common.sh@403 -- # is_hw=yes
00:25:10.515 08:59:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:25:10.515 08:59:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:25:10.515 08:59:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:25:10.515 08:59:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:10.515 08:59:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:10.515 08:59:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:10.515 08:59:27 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:25:10.515 08:59:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:10.515 08:59:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:10.515 08:59:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:25:10.515 08:59:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:10.515 08:59:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:10.515 08:59:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:10.515 08:59:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:10.515 08:59:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:10.515 08:59:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:10.774 08:59:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:10.774 08:59:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:10.774 08:59:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:10.774 08:59:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:10.774 08:59:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:10.774 08:59:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:10.774 08:59:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:10.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:10.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms
00:25:10.774 
00:25:10.774 --- 10.0.0.2 ping statistics ---
00:25:10.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:10.774 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms
00:25:10.774 08:59:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:10.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:10.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms
00:25:10.774 
00:25:10.774 --- 10.0.0.1 ping statistics ---
00:25:10.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:10.774 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms
00:25:10.774 08:59:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:10.774 08:59:27 -- nvmf/common.sh@411 -- # return 0
00:25:10.774 08:59:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:25:10.775 08:59:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:10.775 08:59:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:25:10.775 08:59:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:25:10.775 08:59:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:10.775 08:59:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:25:10.775 08:59:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:25:10.775 08:59:28 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:25:10.775 08:59:28 -- common/autotest_common.sh@710 -- # xtrace_disable
00:25:10.775 08:59:28 -- common/autotest_common.sh@10 -- # set +x
00:25:10.775 08:59:28 -- host/identify.sh@19 -- # nvmfpid=2163110
00:25:10.775 08:59:28 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:10.775 08:59:28 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:10.775 08:59:28 -- host/identify.sh@23 -- # waitforlisten 2163110
00:25:10.775 08:59:28 -- common/autotest_common.sh@817 -- # '[' -z 2163110 ']'
00:25:10.775 08:59:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:10.775 08:59:28 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:10.775 08:59:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:10.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:10.775 08:59:28 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:10.775 08:59:28 -- common/autotest_common.sh@10 -- # set +x
00:25:11.033 [2024-04-26 08:59:28.071795] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:25:11.033 [2024-04-26 08:59:28.071850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:11.033 EAL: No free 2048 kB hugepages reported on node 1
00:25:11.033 [2024-04-26 08:59:28.148242] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:11.033 [2024-04-26 08:59:28.217416] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:11.033 [2024-04-26 08:59:28.217466] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:11.033 [2024-04-26 08:59:28.217476] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:11.033 [2024-04-26 08:59:28.217484] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:11.033 [2024-04-26 08:59:28.217508] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:11.033 [2024-04-26 08:59:28.217550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:11.033 [2024-04-26 08:59:28.217643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:11.033 [2024-04-26 08:59:28.217729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:25:11.033 [2024-04-26 08:59:28.217731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:11.969 08:59:28 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:25:11.969 08:59:28 -- common/autotest_common.sh@850 -- # return 0
00:25:11.969 08:59:28 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:11.969 08:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:11.969 08:59:28 -- common/autotest_common.sh@10 -- # set +x
00:25:11.969 [2024-04-26 08:59:28.884176] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:11.969 08:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:11.969 08:59:28 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:25:11.969 08:59:28 -- common/autotest_common.sh@716 -- # xtrace_disable
00:25:11.969 08:59:28 -- common/autotest_common.sh@10 -- # set +x
00:25:11.969 08:59:28 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:11.969 08:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:11.969 08:59:28 -- common/autotest_common.sh@10 -- # set +x
00:25:11.969 Malloc0
00:25:11.969 08:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:11.969 08:59:28 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:11.969 08:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:11.969 08:59:28 -- common/autotest_common.sh@10 -- # set +x
00:25:11.969 08:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:11.969 08:59:28 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:25:11.969 08:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:11.969 08:59:28 -- common/autotest_common.sh@10 -- # set +x
00:25:11.969 08:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:11.969 08:59:28 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:11.969 08:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:11.969 08:59:28 -- common/autotest_common.sh@10 -- # set +x
00:25:11.969 [2024-04-26 08:59:28.986951] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:11.969 08:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:11.969 08:59:28 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:11.969 08:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:11.969 08:59:28 -- common/autotest_common.sh@10 -- # set +x
00:25:11.969 08:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:11.969 08:59:28 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:25:11.969 08:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:11.969 08:59:28 -- common/autotest_common.sh@10 -- # set +x
00:25:11.969 [2024-04-26 08:59:29.002742] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:25:11.969 [
00:25:11.969 {
00:25:11.969 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:25:11.969 "subtype": "Discovery",
00:25:11.969 "listen_addresses": [
00:25:11.969 {
00:25:11.969 "transport": "TCP",
00:25:11.969 "trtype": "TCP",
00:25:11.969 "adrfam": "IPv4",
00:25:11.969 "traddr": "10.0.0.2",
00:25:11.969 "trsvcid": "4420"
00:25:11.969 }
00:25:11.969 ],
00:25:11.969 "allow_any_host": true,
00:25:11.969 "hosts": []
00:25:11.969 },
00:25:11.969 {
00:25:11.969 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:25:11.969 "subtype": "NVMe",
00:25:11.969 "listen_addresses": [
00:25:11.969 {
00:25:11.969 "transport": "TCP",
00:25:11.969 "trtype": "TCP",
00:25:11.969 "adrfam": "IPv4",
00:25:11.969 "traddr": "10.0.0.2",
00:25:11.969 "trsvcid": "4420"
00:25:11.969 }
00:25:11.969 ],
00:25:11.969 "allow_any_host": true,
00:25:11.969 "hosts": [],
00:25:11.969 "serial_number": "SPDK00000000000001",
00:25:11.969 "model_number": "SPDK bdev Controller",
00:25:11.969 "max_namespaces": 32,
00:25:11.969 "min_cntlid": 1,
00:25:11.969 "max_cntlid": 65519,
00:25:11.969 "namespaces": [
00:25:11.969 {
00:25:11.969 "nsid": 1,
00:25:11.969 "bdev_name": "Malloc0",
00:25:11.969 "name": "Malloc0",
00:25:11.969 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:25:11.969 "eui64": "ABCDEF0123456789",
00:25:11.969 "uuid": "2f0a67b6-c4d9-45ac-8103-694668acb716"
00:25:11.969 }
00:25:11.969 ]
00:25:11.969 }
00:25:11.969 ]
00:25:11.969 08:59:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:11.969 08:59:29 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:25:11.969 [2024-04-26 08:59:29.044699] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
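In the harness, rpc_cmd effectively forwards each command to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock, so the target provisioning just traced is equivalent to the following direct calls (a sketch for reproducing this outside autotest; all arguments copied from the trace above):

  # Run from the spdk checkout while nvmf_tgt is up, in the same netns as the RPC socket.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # same options as host/identify.sh@24
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems                              # prints the JSON shown above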
00:25:11.969 [2024-04-26 08:59:29.044739] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163394 ] 00:25:11.969 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.969 [2024-04-26 08:59:29.076806] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:11.969 [2024-04-26 08:59:29.076854] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:11.969 [2024-04-26 08:59:29.076861] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:11.969 [2024-04-26 08:59:29.076876] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:11.969 [2024-04-26 08:59:29.076885] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:11.969 [2024-04-26 08:59:29.077282] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:11.969 [2024-04-26 08:59:29.077319] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfeed80 0 00:25:11.969 [2024-04-26 08:59:29.091465] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:11.969 [2024-04-26 08:59:29.091485] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:11.970 [2024-04-26 08:59:29.091491] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:11.970 [2024-04-26 08:59:29.091496] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:11.970 [2024-04-26 08:59:29.091540] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.091548] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.091553] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfeed80) 00:25:11.970 [2024-04-26 08:59:29.091567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:11.970 [2024-04-26 08:59:29.091588] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058a60, cid 0, qid 0 00:25:11.970 [2024-04-26 08:59:29.099459] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.970 [2024-04-26 08:59:29.099468] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.970 [2024-04-26 08:59:29.099472] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.099478] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058a60) on tqpair=0xfeed80 00:25:11.970 [2024-04-26 08:59:29.099493] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:11.970 [2024-04-26 08:59:29.099501] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:11.970 [2024-04-26 08:59:29.099507] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:11.970 [2024-04-26 08:59:29.099533] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.099538] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.099543] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfeed80) 00:25:11.970 [2024-04-26 08:59:29.099550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.970 [2024-04-26 08:59:29.099564] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058a60, cid 0, qid 0 00:25:11.970 [2024-04-26 08:59:29.099809] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.970 [2024-04-26 08:59:29.099819] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.970 [2024-04-26 08:59:29.099824] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.099829] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058a60) on tqpair=0xfeed80 00:25:11.970 [2024-04-26 08:59:29.099838] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:11.970 [2024-04-26 08:59:29.099848] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:11.970 [2024-04-26 08:59:29.099856] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.099861] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.099866] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfeed80) 00:25:11.970 [2024-04-26 08:59:29.099875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.970 [2024-04-26 08:59:29.099893] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058a60, cid 0, qid 0 00:25:11.970 [2024-04-26 08:59:29.100039] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.970 [2024-04-26 08:59:29.100047] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.970 [2024-04-26 08:59:29.100051] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.100056] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058a60) on tqpair=0xfeed80 00:25:11.970 [2024-04-26 08:59:29.100063] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:11.970 [2024-04-26 08:59:29.100073] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:11.970 [2024-04-26 08:59:29.100082] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.100086] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.100091] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfeed80) 00:25:11.970 [2024-04-26 08:59:29.100098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.970 [2024-04-26 08:59:29.100112] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058a60, cid 0, qid 0 00:25:11.970 [2024-04-26 08:59:29.100257] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.970 [2024-04-26 08:59:29.100264] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.970 [2024-04-26 08:59:29.100269] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.100273] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058a60) on tqpair=0xfeed80 00:25:11.970 [2024-04-26 08:59:29.100281] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:11.970 [2024-04-26 08:59:29.100292] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.100297] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.100302] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfeed80) 00:25:11.970 [2024-04-26 08:59:29.100309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.970 [2024-04-26 08:59:29.100322] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058a60, cid 0, qid 0 00:25:11.970 [2024-04-26 08:59:29.100467] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.970 [2024-04-26 08:59:29.100476] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.970 [2024-04-26 08:59:29.100481] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.100486] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058a60) on tqpair=0xfeed80 00:25:11.970 [2024-04-26 08:59:29.100493] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:11.970 [2024-04-26 08:59:29.100500] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:11.970 [2024-04-26 08:59:29.100509] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:11.970 [2024-04-26 08:59:29.100616] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:11.970 [2024-04-26 08:59:29.100623] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:11.970 [2024-04-26 08:59:29.100633] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.100642] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.100647] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfeed80) 00:25:11.970 [2024-04-26 08:59:29.100655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.970 [2024-04-26 08:59:29.100669] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058a60, cid 0, qid 0 00:25:11.970 [2024-04-26 08:59:29.100974] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.970 [2024-04-26 08:59:29.100981] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.970 [2024-04-26 08:59:29.100985] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:25:11.970 [2024-04-26 08:59:29.100990] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058a60) on tqpair=0xfeed80 00:25:11.970 [2024-04-26 08:59:29.100997] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:11.970 [2024-04-26 08:59:29.101007] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.101012] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.101017] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfeed80) 00:25:11.970 [2024-04-26 08:59:29.101024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.970 [2024-04-26 08:59:29.101035] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058a60, cid 0, qid 0 00:25:11.970 [2024-04-26 08:59:29.101173] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.970 [2024-04-26 08:59:29.101181] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.970 [2024-04-26 08:59:29.101186] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.970 [2024-04-26 08:59:29.101190] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058a60) on tqpair=0xfeed80 00:25:11.970 [2024-04-26 08:59:29.101197] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:11.970 [2024-04-26 08:59:29.101203] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:11.971 [2024-04-26 08:59:29.101213] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:11.971 [2024-04-26 08:59:29.101223] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:11.971 [2024-04-26 08:59:29.101236] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.101241] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfeed80) 00:25:11.971 [2024-04-26 08:59:29.101249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.971 [2024-04-26 08:59:29.101263] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058a60, cid 0, qid 0 00:25:11.971 [2024-04-26 08:59:29.101589] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.971 [2024-04-26 08:59:29.101596] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.971 [2024-04-26 08:59:29.101602] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.101607] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfeed80): datao=0, datal=4096, cccid=0 00:25:11.971 [2024-04-26 08:59:29.101613] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1058a60) on tqpair(0xfeed80): expected_datao=0, payload_size=4096 00:25:11.971 [2024-04-26 08:59:29.101619] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:25:11.971 [2024-04-26 08:59:29.101627] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.101634] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.101850] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.971 [2024-04-26 08:59:29.101856] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.971 [2024-04-26 08:59:29.101861] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.101866] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058a60) on tqpair=0xfeed80 00:25:11.971 [2024-04-26 08:59:29.101875] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:11.971 [2024-04-26 08:59:29.101882] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:11.971 [2024-04-26 08:59:29.101887] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:11.971 [2024-04-26 08:59:29.101895] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:11.971 [2024-04-26 08:59:29.101900] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:11.971 [2024-04-26 08:59:29.101906] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:11.971 [2024-04-26 08:59:29.101917] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:11.971 [2024-04-26 08:59:29.101924] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.101929] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.101934] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfeed80) 00:25:11.971 [2024-04-26 08:59:29.101942] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:11.971 [2024-04-26 08:59:29.101954] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058a60, cid 0, qid 0 00:25:11.971 [2024-04-26 08:59:29.102250] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.971 [2024-04-26 08:59:29.102256] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.971 [2024-04-26 08:59:29.102261] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.102265] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058a60) on tqpair=0xfeed80 00:25:11.971 [2024-04-26 08:59:29.102274] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.102279] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.102283] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfeed80) 00:25:11.971 [2024-04-26 08:59:29.102290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.971 [2024-04-26 08:59:29.102297] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.102302] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.102306] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfeed80) 00:25:11.971 [2024-04-26 08:59:29.102312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.971 [2024-04-26 08:59:29.102319] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.102324] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.102328] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfeed80) 00:25:11.971 [2024-04-26 08:59:29.102334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.971 [2024-04-26 08:59:29.102341] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.102348] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.102352] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfeed80) 00:25:11.971 [2024-04-26 08:59:29.102358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.971 [2024-04-26 08:59:29.102364] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:11.971 [2024-04-26 08:59:29.102376] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:11.971 [2024-04-26 08:59:29.102384] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.102388] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfeed80) 00:25:11.971 [2024-04-26 08:59:29.102395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.971 [2024-04-26 08:59:29.102408] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058a60, cid 0, qid 0 00:25:11.971 [2024-04-26 08:59:29.102414] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058bc0, cid 1, qid 0 00:25:11.971 [2024-04-26 08:59:29.102419] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058d20, cid 2, qid 0 00:25:11.971 [2024-04-26 08:59:29.102425] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058e80, cid 3, qid 0 00:25:11.971 [2024-04-26 08:59:29.102430] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058fe0, cid 4, qid 0 00:25:11.971 [2024-04-26 08:59:29.102609] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.971 [2024-04-26 08:59:29.102620] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.971 [2024-04-26 08:59:29.102624] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.102629] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058fe0) on tqpair=0xfeed80 00:25:11.971 [2024-04-26 08:59:29.102637] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:11.971 [2024-04-26 08:59:29.102645] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:11.971 [2024-04-26 08:59:29.102658] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.102663] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfeed80) 00:25:11.971 [2024-04-26 08:59:29.102670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.971 [2024-04-26 08:59:29.102684] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058fe0, cid 4, qid 0 00:25:11.971 [2024-04-26 08:59:29.102831] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.971 [2024-04-26 08:59:29.102839] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.971 [2024-04-26 08:59:29.102843] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.102848] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfeed80): datao=0, datal=4096, cccid=4 00:25:11.971 [2024-04-26 08:59:29.102854] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1058fe0) on tqpair(0xfeed80): expected_datao=0, payload_size=4096 00:25:11.971 [2024-04-26 08:59:29.102859] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.103106] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.103111] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.971 [2024-04-26 08:59:29.103230] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.972 [2024-04-26 08:59:29.103237] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.972 [2024-04-26 08:59:29.103244] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.972 [2024-04-26 08:59:29.103249] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058fe0) on tqpair=0xfeed80 00:25:11.972 [2024-04-26 08:59:29.103265] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:11.972 [2024-04-26 08:59:29.103287] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.972 [2024-04-26 08:59:29.103293] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfeed80) 00:25:11.972 [2024-04-26 08:59:29.103300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.972 [2024-04-26 08:59:29.103308] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.972 [2024-04-26 08:59:29.103313] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.972 [2024-04-26 08:59:29.103317] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfeed80) 00:25:11.972 [2024-04-26 08:59:29.103324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.972 [2024-04-26 08:59:29.103342] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1058fe0, cid 4, qid 0 00:25:11.972 [2024-04-26 08:59:29.103348] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1059140, cid 5, qid 0 00:25:11.972 [2024-04-26 08:59:29.107460] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.972 [2024-04-26 08:59:29.107474] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.972 [2024-04-26 08:59:29.107478] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.972 [2024-04-26 08:59:29.107483] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfeed80): datao=0, datal=1024, cccid=4 00:25:11.972 [2024-04-26 08:59:29.107489] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1058fe0) on tqpair(0xfeed80): expected_datao=0, payload_size=1024 00:25:11.972 [2024-04-26 08:59:29.107495] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.972 [2024-04-26 08:59:29.107502] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.972 [2024-04-26 08:59:29.107507] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:11.972 [2024-04-26 08:59:29.107513] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.972 [2024-04-26 08:59:29.107519] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.972 [2024-04-26 08:59:29.107523] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.972 [2024-04-26 08:59:29.107528] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1059140) on tqpair=0xfeed80 00:25:11.972 [2024-04-26 08:59:29.146462] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:11.972 [2024-04-26 08:59:29.146472] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:11.972 [2024-04-26 08:59:29.146476] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:11.972 [2024-04-26 08:59:29.146481] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058fe0) on tqpair=0xfeed80 00:25:11.972 [2024-04-26 08:59:29.146494] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:11.972 [2024-04-26 08:59:29.146500] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfeed80) 00:25:11.972 [2024-04-26 08:59:29.146507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:11.972 [2024-04-26 08:59:29.146525] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058fe0, cid 4, qid 0 00:25:11.972 [2024-04-26 08:59:29.146757] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:11.972 [2024-04-26 08:59:29.146766] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:11.972 [2024-04-26 08:59:29.146771] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:11.972 [2024-04-26 08:59:29.146775] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfeed80): datao=0, datal=3072, cccid=4 00:25:11.972 [2024-04-26 08:59:29.146784] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1058fe0) on tqpair(0xfeed80): expected_datao=0, payload_size=3072 00:25:11.972 [2024-04-26 08:59:29.146790] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:11.972 [2024-04-26 08:59:29.147052] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:11.972 [2024-04-26 08:59:29.147057] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:11.972 [2024-04-26 08:59:29.187680] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:11.972 [2024-04-26 08:59:29.187694] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:11.972 [2024-04-26 08:59:29.187699] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:11.972 [2024-04-26 08:59:29.187704] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058fe0) on tqpair=0xfeed80
00:25:11.972 [2024-04-26 08:59:29.187718] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:11.972 [2024-04-26 08:59:29.187723] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfeed80)
00:25:11.972 [2024-04-26 08:59:29.187730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:11.972 [2024-04-26 08:59:29.187750] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058fe0, cid 4, qid 0
00:25:11.972 [2024-04-26 08:59:29.187897] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:25:11.972 [2024-04-26 08:59:29.187905] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:25:11.972 [2024-04-26 08:59:29.187909] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:25:11.972 [2024-04-26 08:59:29.187914] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfeed80): datao=0, datal=8, cccid=4
00:25:11.972 [2024-04-26 08:59:29.187920] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1058fe0) on tqpair(0xfeed80): expected_datao=0, payload_size=8
00:25:11.972 [2024-04-26 08:59:29.187926] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:11.972 [2024-04-26 08:59:29.187933] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:25:11.972 [2024-04-26 08:59:29.187938] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:25:12.245 [2024-04-26 08:59:29.228701] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.245 [2024-04-26 08:59:29.228715] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.245 [2024-04-26 08:59:29.228720] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.245 [2024-04-26 08:59:29.228725] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058fe0) on tqpair=0xfeed80
00:25:12.245 =====================================================
00:25:12.245 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:25:12.245 =====================================================
00:25:12.245 Controller Capabilities/Features
00:25:12.245 ================================
00:25:12.245 Vendor ID: 0000
00:25:12.245 Subsystem Vendor ID: 0000
00:25:12.245 Serial Number: ....................
00:25:12.245 Model Number: ........................................
00:25:12.245 Firmware Version: 24.05
00:25:12.245 Recommended Arb Burst: 0
00:25:12.245 IEEE OUI Identifier: 00 00 00
00:25:12.245 Multi-path I/O
00:25:12.245 May have multiple subsystem ports: No
00:25:12.245 May have multiple controllers: No
00:25:12.245 Associated with SR-IOV VF: No
00:25:12.245 Max Data Transfer Size: 131072
00:25:12.245 Max Number of Namespaces: 0
00:25:12.245 Max Number of I/O Queues: 1024
00:25:12.245 NVMe Specification Version (VS): 1.3
00:25:12.245 NVMe Specification Version (Identify): 1.3
00:25:12.245 Maximum Queue Entries: 128
00:25:12.245 Contiguous Queues Required: Yes
00:25:12.245 Arbitration Mechanisms Supported
00:25:12.245 Weighted Round Robin: Not Supported
00:25:12.245 Vendor Specific: Not Supported
00:25:12.245 Reset Timeout: 15000 ms
00:25:12.245 Doorbell Stride: 4 bytes
00:25:12.245 NVM Subsystem Reset: Not Supported
00:25:12.245 Command Sets Supported
00:25:12.245 NVM Command Set: Supported
00:25:12.245 Boot Partition: Not Supported
00:25:12.245 Memory Page Size Minimum: 4096 bytes
00:25:12.245 Memory Page Size Maximum: 4096 bytes
00:25:12.245 Persistent Memory Region: Not Supported
00:25:12.245 Optional Asynchronous Events Supported
00:25:12.245 Namespace Attribute Notices: Not Supported
00:25:12.245 Firmware Activation Notices: Not Supported
00:25:12.245 ANA Change Notices: Not Supported
00:25:12.245 PLE Aggregate Log Change Notices: Not Supported
00:25:12.245 LBA Status Info Alert Notices: Not Supported
00:25:12.245 EGE Aggregate Log Change Notices: Not Supported
00:25:12.245 Normal NVM Subsystem Shutdown event: Not Supported
00:25:12.245 Zone Descriptor Change Notices: Not Supported
00:25:12.245 Discovery Log Change Notices: Supported
00:25:12.245 Controller Attributes
00:25:12.245 128-bit Host Identifier: Not Supported
00:25:12.245 Non-Operational Permissive Mode: Not Supported
00:25:12.245 NVM Sets: Not Supported
00:25:12.245 Read Recovery Levels: Not Supported
00:25:12.245 Endurance Groups: Not Supported
00:25:12.245 Predictable Latency Mode: Not Supported
00:25:12.245 Traffic Based Keep ALive: Not Supported
00:25:12.245 Namespace Granularity: Not Supported
00:25:12.245 SQ Associations: Not Supported
00:25:12.245 UUID List: Not Supported
00:25:12.245 Multi-Domain Subsystem: Not Supported
00:25:12.245 Fixed Capacity Management: Not Supported
00:25:12.245 Variable Capacity Management: Not Supported
00:25:12.245 Delete Endurance Group: Not Supported
00:25:12.245 Delete NVM Set: Not Supported
00:25:12.245 Extended LBA Formats Supported: Not Supported
00:25:12.245 Flexible Data Placement Supported: Not Supported
00:25:12.245 
00:25:12.245 Controller Memory Buffer Support
00:25:12.245 ================================
00:25:12.245 Supported: No
00:25:12.245 
00:25:12.245 Persistent Memory Region Support
00:25:12.245 ================================
00:25:12.245 Supported: No
00:25:12.245 
00:25:12.245 Admin Command Set Attributes
00:25:12.245 ============================
00:25:12.245 Security Send/Receive: Not Supported
00:25:12.245 Format NVM: Not Supported
00:25:12.245 Firmware Activate/Download: Not Supported
00:25:12.245 Namespace Management: Not Supported
00:25:12.245 Device Self-Test: Not Supported
00:25:12.245 Directives: Not Supported
00:25:12.245 NVMe-MI: Not Supported
00:25:12.245 Virtualization Management: Not Supported
00:25:12.245 Doorbell Buffer Config: Not Supported
00:25:12.245 Get LBA Status Capability: Not Supported
00:25:12.245 Command & Feature Lockdown Capability: Not Supported
00:25:12.245 Abort Command Limit: 1
00:25:12.245 Async Event Request Limit: 4
00:25:12.245 Number of Firmware Slots: N/A
00:25:12.245 Firmware Slot 1 Read-Only: N/A
00:25:12.245 Firmware Activation Without Reset: N/A
00:25:12.245 Multiple Update Detection Support: N/A
00:25:12.245 Firmware Update Granularity: No Information Provided
00:25:12.245 Per-Namespace SMART Log: No
00:25:12.245 Asymmetric Namespace Access Log Page: Not Supported
00:25:12.245 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:25:12.245 Command Effects Log Page: Not Supported
00:25:12.245 Get Log Page Extended Data: Supported
00:25:12.245 Telemetry Log Pages: Not Supported
00:25:12.245 Persistent Event Log Pages: Not Supported
00:25:12.245 Supported Log Pages Log Page: May Support
00:25:12.245 Commands Supported & Effects Log Page: Not Supported
00:25:12.245 Feature Identifiers & Effects Log Page:May Support
00:25:12.245 NVMe-MI Commands & Effects Log Page: May Support
00:25:12.245 Data Area 4 for Telemetry Log: Not Supported
00:25:12.245 Error Log Page Entries Supported: 128
00:25:12.245 Keep Alive: Not Supported
00:25:12.245 
00:25:12.245 NVM Command Set Attributes
00:25:12.245 ==========================
00:25:12.245 Submission Queue Entry Size
00:25:12.245 Max: 1
00:25:12.245 Min: 1
00:25:12.245 Completion Queue Entry Size
00:25:12.245 Max: 1
00:25:12.245 Min: 1
00:25:12.245 Number of Namespaces: 0
00:25:12.245 Compare Command: Not Supported
00:25:12.245 Write Uncorrectable Command: Not Supported
00:25:12.245 Dataset Management Command: Not Supported
00:25:12.245 Write Zeroes Command: Not Supported
00:25:12.245 Set Features Save Field: Not Supported
00:25:12.245 Reservations: Not Supported
00:25:12.245 Timestamp: Not Supported
00:25:12.245 Copy: Not Supported
00:25:12.245 Volatile Write Cache: Not Present
00:25:12.245 Atomic Write Unit (Normal): 1
00:25:12.245 Atomic Write Unit (PFail): 1
00:25:12.245 Atomic Compare & Write Unit: 1
00:25:12.245 Fused Compare & Write: Supported
00:25:12.245 Scatter-Gather List
00:25:12.245 SGL Command Set: Supported
00:25:12.245 SGL Keyed: Supported
00:25:12.245 SGL Bit Bucket Descriptor: Not Supported
00:25:12.245 SGL Metadata Pointer: Not Supported
00:25:12.245 Oversized SGL: Not Supported
00:25:12.245 SGL Metadata Address: Not Supported
00:25:12.245 SGL Offset: Supported
00:25:12.245 Transport SGL Data Block: Not Supported
00:25:12.246 Replay Protected Memory Block: Not Supported
00:25:12.246 
00:25:12.246 Firmware Slot Information
00:25:12.246 =========================
00:25:12.246 Active slot: 0
00:25:12.246 
00:25:12.246 
00:25:12.246 Error Log
00:25:12.246 =========
00:25:12.246 
00:25:12.246 Active Namespaces
00:25:12.246 =================
00:25:12.246 Discovery Log Page
00:25:12.246 ==================
00:25:12.246 Generation Counter: 2
00:25:12.246 Number of Records: 2
00:25:12.246 Record Format: 0
00:25:12.246 
00:25:12.246 Discovery Log Entry 0
00:25:12.246 ----------------------
00:25:12.246 Transport Type: 3 (TCP)
00:25:12.246 Address Family: 1 (IPv4)
00:25:12.246 Subsystem Type: 3 (Current Discovery Subsystem)
00:25:12.246 Entry Flags:
00:25:12.246 Duplicate Returned Information: 1
00:25:12.246 Explicit Persistent Connection Support for Discovery: 1
00:25:12.246 Transport Requirements:
00:25:12.246 Secure Channel: Not Required
00:25:12.246 Port ID: 0 (0x0000)
00:25:12.246 Controller ID: 65535 (0xffff)
00:25:12.246 Admin Max SQ Size: 128
00:25:12.246 Transport Service Identifier: 4420
00:25:12.246 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:25:12.246 Transport Address: 10.0.0.2
00:25:12.246 Discovery Log Entry 1
00:25:12.246 ----------------------
00:25:12.246 Transport Type: 3 (TCP)
00:25:12.246 Address Family: 1 (IPv4)
00:25:12.246 Subsystem Type: 2 (NVM Subsystem)
00:25:12.246 Entry Flags:
00:25:12.246 Duplicate Returned Information: 0
00:25:12.246 Explicit Persistent Connection Support for Discovery: 0
00:25:12.246 Transport Requirements:
00:25:12.246 Secure Channel: Not Required
00:25:12.246 Port ID: 0 (0x0000)
00:25:12.246 Controller ID: 65535 (0xffff)
00:25:12.246 Admin Max SQ Size: 128
00:25:12.246 Transport Service Identifier: 4420
00:25:12.246 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:25:12.246 Transport Address: 10.0.0.2 [2024-04-26 08:59:29.228815] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:25:12.246 [2024-04-26 08:59:29.228830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.246 [2024-04-26 08:59:29.228838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.246 [2024-04-26 08:59:29.228845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.246 [2024-04-26 08:59:29.228852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:12.246 [2024-04-26 08:59:29.228861] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.246 [2024-04-26 08:59:29.228866] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.246 [2024-04-26 08:59:29.228871] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfeed80)
00:25:12.246 [2024-04-26 08:59:29.228879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.246 [2024-04-26 08:59:29.228895] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058e80, cid 3, qid 0
00:25:12.246 [2024-04-26 08:59:29.229056] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.246 [2024-04-26 08:59:29.229068] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.246 [2024-04-26 08:59:29.229072] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:25:12.246 [2024-04-26 08:59:29.229077] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058e80) on tqpair=0xfeed80
00:25:12.246 [2024-04-26 08:59:29.229086] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:25:12.246 [2024-04-26 08:59:29.229091] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:25:12.246 [2024-04-26 08:59:29.229096] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfeed80)
00:25:12.246 [2024-04-26 08:59:29.229103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:12.246 [2024-04-26 08:59:29.229120] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058e80, cid 3, qid 0
00:25:12.246 [2024-04-26 08:59:29.229293] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:25:12.246 [2024-04-26 08:59:29.229300] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:25:12.246 [2024-04-26 08:59:29.229305]
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.229310] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058e80) on tqpair=0xfeed80 00:25:12.246 [2024-04-26 08:59:29.229317] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:12.246 [2024-04-26 08:59:29.229323] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:12.246 [2024-04-26 08:59:29.229335] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.229340] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.229344] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfeed80) 00:25:12.246 [2024-04-26 08:59:29.229352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.246 [2024-04-26 08:59:29.229365] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058e80, cid 3, qid 0 00:25:12.246 [2024-04-26 08:59:29.229519] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.246 [2024-04-26 08:59:29.229527] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.246 [2024-04-26 08:59:29.229532] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.229537] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058e80) on tqpair=0xfeed80 00:25:12.246 [2024-04-26 08:59:29.229550] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.229555] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.229560] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfeed80) 00:25:12.246 [2024-04-26 08:59:29.229567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.246 [2024-04-26 08:59:29.229581] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058e80, cid 3, qid 0 00:25:12.246 [2024-04-26 08:59:29.229725] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.246 [2024-04-26 08:59:29.229732] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.246 [2024-04-26 08:59:29.229736] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.229741] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058e80) on tqpair=0xfeed80 00:25:12.246 [2024-04-26 08:59:29.229753] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.229758] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.229763] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfeed80) 00:25:12.246 [2024-04-26 08:59:29.229770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.246 [2024-04-26 08:59:29.229786] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058e80, cid 3, qid 0 00:25:12.246 [2024-04-26 08:59:29.229927] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.246 [2024-04-26 
08:59:29.229934] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.246 [2024-04-26 08:59:29.229939] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.229944] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058e80) on tqpair=0xfeed80 00:25:12.246 [2024-04-26 08:59:29.229956] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.229961] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.229965] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfeed80) 00:25:12.246 [2024-04-26 08:59:29.229973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.246 [2024-04-26 08:59:29.229986] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058e80, cid 3, qid 0 00:25:12.246 [2024-04-26 08:59:29.230129] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.246 [2024-04-26 08:59:29.230136] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.246 [2024-04-26 08:59:29.230140] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.230145] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058e80) on tqpair=0xfeed80 00:25:12.246 [2024-04-26 08:59:29.230157] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.230162] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.230167] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfeed80) 00:25:12.246 [2024-04-26 08:59:29.230174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.246 [2024-04-26 08:59:29.230186] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058e80, cid 3, qid 0 00:25:12.246 [2024-04-26 08:59:29.230328] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.246 [2024-04-26 08:59:29.230335] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.246 [2024-04-26 08:59:29.230339] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.230344] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058e80) on tqpair=0xfeed80 00:25:12.246 [2024-04-26 08:59:29.230356] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.230361] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.230366] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfeed80) 00:25:12.246 [2024-04-26 08:59:29.230373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.246 [2024-04-26 08:59:29.230385] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058e80, cid 3, qid 0 00:25:12.246 [2024-04-26 08:59:29.234459] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.246 [2024-04-26 08:59:29.234472] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.246 [2024-04-26 08:59:29.234477] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
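With the discovery controller shut down, host/identify.sh repeats the identify pass against the NVM subsystem itself (the run that starts below). Condensed, the two invocations this test performs are as follows (workspace prefix trimmed; the -r strings are verbatim from host/identify.sh@39 and @45 in this log):

  # Discovery subsystem first, then the data subsystem, both with full debug tracing (-L all):
  build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
  build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
  # Since the job already ran 'modprobe nvme-tcp', a kernel-initiator check of the same
  # discovery log page would be (assumes nvme-cli is installed; not part of this run):
  nvme discover -t tcp -a 10.0.0.2 -s 4420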
00:25:12.246 [2024-04-26 08:59:29.234482] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058e80) on tqpair=0xfeed80 00:25:12.246 [2024-04-26 08:59:29.234495] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.234500] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.234505] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfeed80) 00:25:12.246 [2024-04-26 08:59:29.234513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.246 [2024-04-26 08:59:29.234528] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1058e80, cid 3, qid 0 00:25:12.246 [2024-04-26 08:59:29.234751] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.246 [2024-04-26 08:59:29.234760] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.246 [2024-04-26 08:59:29.234765] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.234770] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1058e80) on tqpair=0xfeed80 00:25:12.246 [2024-04-26 08:59:29.234780] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:25:12.246 00:25:12.246 08:59:29 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:12.246 [2024-04-26 08:59:29.274372] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
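
The spdk_nvme_identify run that begins here drives SPDK's public host API end to end: from an application's point of view, the entire initialization traced below (icreq/icresp, FABRIC CONNECT, reading VS and CAP, writing CC.EN = 1, waiting for CSTS.RDY = 1, IDENTIFY, AER and keep-alive setup) happens inside a single spdk_nvme_connect() call. A minimal sketch against the same target, assuming default environment options suffice; keep_alive_timeout_ms is the spdk_nvme_ctrlr_opts field behind the SET FEATURES KEEP ALIVE TIMER step logged further down.

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr_opts ctrlr_opts;
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same transport string that was passed to spdk_nvme_identify -r above. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
    ctrlr_opts.keep_alive_timeout_ms = 10000; /* -> "Sending keep alive every 5000000 us" */

    /* Runs the whole state machine printed below: connect adminq ->
     * icreq/icresp -> FABRIC CONNECT -> property reads -> enable ->
     * IDENTIFY -> configure AER -> ready. */
    ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
    if (ctrlr == NULL) {
        return 1;
    }

    spdk_nvme_detach(ctrlr);
    return 0;
}
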
00:25:12.246 [2024-04-26 08:59:29.274412] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163402 ] 00:25:12.246 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.246 [2024-04-26 08:59:29.305503] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:12.246 [2024-04-26 08:59:29.305546] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:12.246 [2024-04-26 08:59:29.305552] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:12.246 [2024-04-26 08:59:29.305564] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:12.246 [2024-04-26 08:59:29.305573] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:12.246 [2024-04-26 08:59:29.306105] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:12.246 [2024-04-26 08:59:29.306130] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xeffd80 0 00:25:12.246 [2024-04-26 08:59:29.320461] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:12.246 [2024-04-26 08:59:29.320494] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:12.246 [2024-04-26 08:59:29.320500] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:12.246 [2024-04-26 08:59:29.320504] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:12.246 [2024-04-26 08:59:29.320539] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.320545] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.320550] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeffd80) 00:25:12.246 [2024-04-26 08:59:29.320561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:12.246 [2024-04-26 08:59:29.320578] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a60, cid 0, qid 0 00:25:12.246 [2024-04-26 08:59:29.328459] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.246 [2024-04-26 08:59:29.328467] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.246 [2024-04-26 08:59:29.328472] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.328477] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a60) on tqpair=0xeffd80 00:25:12.246 [2024-04-26 08:59:29.328489] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:12.246 [2024-04-26 08:59:29.328496] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:12.246 [2024-04-26 08:59:29.328503] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:12.246 [2024-04-26 08:59:29.328517] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.328522] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.246 [2024-04-26 
08:59:29.328527] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeffd80) 00:25:12.246 [2024-04-26 08:59:29.328534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.246 [2024-04-26 08:59:29.328548] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a60, cid 0, qid 0 00:25:12.246 [2024-04-26 08:59:29.328786] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.246 [2024-04-26 08:59:29.328795] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.246 [2024-04-26 08:59:29.328800] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.246 [2024-04-26 08:59:29.328805] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a60) on tqpair=0xeffd80 00:25:12.246 [2024-04-26 08:59:29.328811] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:12.247 [2024-04-26 08:59:29.328822] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:12.247 [2024-04-26 08:59:29.328831] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.328835] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.328840] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeffd80) 00:25:12.247 [2024-04-26 08:59:29.328848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.247 [2024-04-26 08:59:29.328861] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a60, cid 0, qid 0 00:25:12.247 [2024-04-26 08:59:29.329006] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.247 [2024-04-26 08:59:29.329013] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.247 [2024-04-26 08:59:29.329018] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.329023] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a60) on tqpair=0xeffd80 00:25:12.247 [2024-04-26 08:59:29.329029] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:12.247 [2024-04-26 08:59:29.329039] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:12.247 [2024-04-26 08:59:29.329047] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.329051] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.329056] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeffd80) 00:25:12.247 [2024-04-26 08:59:29.329063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.247 [2024-04-26 08:59:29.329076] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a60, cid 0, qid 0 00:25:12.247 [2024-04-26 08:59:29.329216] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.247 [2024-04-26 08:59:29.329224] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.247 
[2024-04-26 08:59:29.329228] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.329233] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a60) on tqpair=0xeffd80 00:25:12.247 [2024-04-26 08:59:29.329239] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:12.247 [2024-04-26 08:59:29.329251] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.329256] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.329264] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeffd80) 00:25:12.247 [2024-04-26 08:59:29.329271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.247 [2024-04-26 08:59:29.329284] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a60, cid 0, qid 0 00:25:12.247 [2024-04-26 08:59:29.329418] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.247 [2024-04-26 08:59:29.329426] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.247 [2024-04-26 08:59:29.329430] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.329435] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a60) on tqpair=0xeffd80 00:25:12.247 [2024-04-26 08:59:29.329441] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:12.247 [2024-04-26 08:59:29.329447] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:12.247 [2024-04-26 08:59:29.329463] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:12.247 [2024-04-26 08:59:29.329570] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:12.247 [2024-04-26 08:59:29.329575] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:12.247 [2024-04-26 08:59:29.329584] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.329589] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.329593] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeffd80) 00:25:12.247 [2024-04-26 08:59:29.329601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.247 [2024-04-26 08:59:29.329615] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a60, cid 0, qid 0 00:25:12.247 [2024-04-26 08:59:29.329753] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.247 [2024-04-26 08:59:29.329761] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.247 [2024-04-26 08:59:29.329765] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.329770] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a60) on tqpair=0xeffd80 00:25:12.247 
[2024-04-26 08:59:29.329776] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:12.247 [2024-04-26 08:59:29.329787] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.329792] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.329796] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeffd80) 00:25:12.247 [2024-04-26 08:59:29.329804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.247 [2024-04-26 08:59:29.329816] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a60, cid 0, qid 0 00:25:12.247 [2024-04-26 08:59:29.329955] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.247 [2024-04-26 08:59:29.329962] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.247 [2024-04-26 08:59:29.329967] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.329972] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a60) on tqpair=0xeffd80 00:25:12.247 [2024-04-26 08:59:29.329977] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:12.247 [2024-04-26 08:59:29.329983] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:12.247 [2024-04-26 08:59:29.329996] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:12.247 [2024-04-26 08:59:29.330005] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:12.247 [2024-04-26 08:59:29.330017] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330022] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeffd80) 00:25:12.247 [2024-04-26 08:59:29.330030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.247 [2024-04-26 08:59:29.330044] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a60, cid 0, qid 0 00:25:12.247 [2024-04-26 08:59:29.330208] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.247 [2024-04-26 08:59:29.330215] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.247 [2024-04-26 08:59:29.330220] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330225] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeffd80): datao=0, datal=4096, cccid=0 00:25:12.247 [2024-04-26 08:59:29.330230] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf69a60) on tqpair(0xeffd80): expected_datao=0, payload_size=4096 00:25:12.247 [2024-04-26 08:59:29.330236] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330511] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330516] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
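
The IDENTIFY (06h) round trip above, with its C2H data PDU carrying a 4096-byte payload, fetches the controller data structure; the entries that follow parse it (transport and MDTS transfer-size limits, CNTLID, fused compare-and-write support) and then queue four ASYNC EVENT REQUESTs. After spdk_nvme_connect() returns, the same data is available through public accessors, and an application can register its own AER handler. A hedged sketch: print_ctrlr_info and on_aer are hypothetical names, while the accessor and callback-registration calls are the real API.

#include <stdio.h>

#include "spdk/nvme.h"

/* Hypothetical AER handler; it runs from
 * spdk_nvme_ctrlr_process_admin_completions() when one of the four
 * ASYNC EVENT REQUESTs queued above completes. */
static void on_aer(void *arg, const struct spdk_nvme_cpl *cpl)
{
    (void)arg;
    printf("async event, cdw0=0x%x\n", cpl->cdw0);
}

static void print_ctrlr_info(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

    /* A few of the fields the identify report below renders. */
    printf("Serial Number: %.20s\n", cdata->sn);
    printf("Model Number:  %.40s\n", cdata->mn);
    printf("Max Number of Namespaces: %u\n", cdata->nn);

    spdk_nvme_ctrlr_register_aer_callback(ctrlr, on_aer, NULL);
}
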
00:25:12.247 [2024-04-26 08:59:29.330631] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.247 [2024-04-26 08:59:29.330639] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.247 [2024-04-26 08:59:29.330643] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330648] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a60) on tqpair=0xeffd80 00:25:12.247 [2024-04-26 08:59:29.330657] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:12.247 [2024-04-26 08:59:29.330663] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:12.247 [2024-04-26 08:59:29.330668] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:12.247 [2024-04-26 08:59:29.330673] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:12.247 [2024-04-26 08:59:29.330679] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:12.247 [2024-04-26 08:59:29.330685] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:12.247 [2024-04-26 08:59:29.330696] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:12.247 [2024-04-26 08:59:29.330704] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330709] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330714] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeffd80) 00:25:12.247 [2024-04-26 08:59:29.330722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:12.247 [2024-04-26 08:59:29.330736] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a60, cid 0, qid 0 00:25:12.247 [2024-04-26 08:59:29.330876] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.247 [2024-04-26 08:59:29.330883] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.247 [2024-04-26 08:59:29.330888] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330892] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69a60) on tqpair=0xeffd80 00:25:12.247 [2024-04-26 08:59:29.330903] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330908] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330912] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xeffd80) 00:25:12.247 [2024-04-26 08:59:29.330919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.247 [2024-04-26 08:59:29.330926] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330931] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330935] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0xeffd80) 00:25:12.247 [2024-04-26 08:59:29.330941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.247 [2024-04-26 08:59:29.330948] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330953] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330957] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xeffd80) 00:25:12.247 [2024-04-26 08:59:29.330964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.247 [2024-04-26 08:59:29.330971] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330975] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.330980] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.247 [2024-04-26 08:59:29.330986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.247 [2024-04-26 08:59:29.330992] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:12.247 [2024-04-26 08:59:29.331005] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:12.247 [2024-04-26 08:59:29.331013] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.331017] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeffd80) 00:25:12.247 [2024-04-26 08:59:29.331024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.247 [2024-04-26 08:59:29.331039] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69a60, cid 0, qid 0 00:25:12.247 [2024-04-26 08:59:29.331045] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69bc0, cid 1, qid 0 00:25:12.247 [2024-04-26 08:59:29.331050] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69d20, cid 2, qid 0 00:25:12.247 [2024-04-26 08:59:29.331056] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.247 [2024-04-26 08:59:29.331061] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69fe0, cid 4, qid 0 00:25:12.247 [2024-04-26 08:59:29.331229] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.247 [2024-04-26 08:59:29.331236] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.247 [2024-04-26 08:59:29.331241] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.331246] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69fe0) on tqpair=0xeffd80 00:25:12.247 [2024-04-26 08:59:29.331252] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:12.247 [2024-04-26 08:59:29.331258] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:12.247 
[2024-04-26 08:59:29.331271] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:12.247 [2024-04-26 08:59:29.331281] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:12.247 [2024-04-26 08:59:29.331289] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.331294] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.331298] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeffd80) 00:25:12.247 [2024-04-26 08:59:29.331306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:12.247 [2024-04-26 08:59:29.331319] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69fe0, cid 4, qid 0 00:25:12.247 [2024-04-26 08:59:29.331468] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.247 [2024-04-26 08:59:29.331476] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.247 [2024-04-26 08:59:29.331480] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.331485] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69fe0) on tqpair=0xeffd80 00:25:12.247 [2024-04-26 08:59:29.331530] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:12.247 [2024-04-26 08:59:29.331541] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:12.247 [2024-04-26 08:59:29.331551] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.331556] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeffd80) 00:25:12.247 [2024-04-26 08:59:29.331563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.247 [2024-04-26 08:59:29.331578] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69fe0, cid 4, qid 0 00:25:12.247 [2024-04-26 08:59:29.331727] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.247 [2024-04-26 08:59:29.331735] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.247 [2024-04-26 08:59:29.331739] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.331744] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeffd80): datao=0, datal=4096, cccid=4 00:25:12.247 [2024-04-26 08:59:29.331750] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf69fe0) on tqpair(0xeffd80): expected_datao=0, payload_size=4096 00:25:12.247 [2024-04-26 08:59:29.331756] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.331997] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.332002] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.247 [2024-04-26 08:59:29.376457] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.247 [2024-04-26 08:59:29.376467] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.248 [2024-04-26 08:59:29.376471] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.376476] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69fe0) on tqpair=0xeffd80 00:25:12.248 [2024-04-26 08:59:29.376490] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:12.248 [2024-04-26 08:59:29.376502] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:12.248 [2024-04-26 08:59:29.376513] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:12.248 [2024-04-26 08:59:29.376521] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.376526] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeffd80) 00:25:12.248 [2024-04-26 08:59:29.376536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.248 [2024-04-26 08:59:29.376550] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69fe0, cid 4, qid 0 00:25:12.248 [2024-04-26 08:59:29.376796] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.248 [2024-04-26 08:59:29.376804] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.248 [2024-04-26 08:59:29.376809] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.376813] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeffd80): datao=0, datal=4096, cccid=4 00:25:12.248 [2024-04-26 08:59:29.376819] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf69fe0) on tqpair(0xeffd80): expected_datao=0, payload_size=4096 00:25:12.248 [2024-04-26 08:59:29.376825] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.377091] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.377096] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.417674] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.248 [2024-04-26 08:59:29.417689] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.248 [2024-04-26 08:59:29.417694] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.417699] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69fe0) on tqpair=0xeffd80 00:25:12.248 [2024-04-26 08:59:29.417714] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:12.248 [2024-04-26 08:59:29.417726] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:12.248 [2024-04-26 08:59:29.417737] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.417742] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeffd80) 00:25:12.248 [2024-04-26 08:59:29.417750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.248 [2024-04-26 08:59:29.417765] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69fe0, cid 4, qid 0 00:25:12.248 [2024-04-26 08:59:29.417915] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.248 [2024-04-26 08:59:29.417923] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.248 [2024-04-26 08:59:29.417928] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.417932] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeffd80): datao=0, datal=4096, cccid=4 00:25:12.248 [2024-04-26 08:59:29.417938] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf69fe0) on tqpair(0xeffd80): expected_datao=0, payload_size=4096 00:25:12.248 [2024-04-26 08:59:29.417944] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.418210] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.418215] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.458862] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.248 [2024-04-26 08:59:29.458877] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.248 [2024-04-26 08:59:29.458882] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.458887] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69fe0) on tqpair=0xeffd80 00:25:12.248 [2024-04-26 08:59:29.458897] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:12.248 [2024-04-26 08:59:29.458908] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:12.248 [2024-04-26 08:59:29.458923] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:12.248 [2024-04-26 08:59:29.458930] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:12.248 [2024-04-26 08:59:29.458937] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:12.248 [2024-04-26 08:59:29.458943] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:12.248 [2024-04-26 08:59:29.458949] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:12.248 [2024-04-26 08:59:29.458956] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:12.248 [2024-04-26 08:59:29.458970] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.458975] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeffd80) 00:25:12.248 [2024-04-26 08:59:29.458984] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.248 [2024-04-26 08:59:29.458991] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.458996] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.459001] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeffd80) 00:25:12.248 [2024-04-26 08:59:29.459007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.248 [2024-04-26 08:59:29.459024] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69fe0, cid 4, qid 0 00:25:12.248 [2024-04-26 08:59:29.459030] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a140, cid 5, qid 0 00:25:12.248 [2024-04-26 08:59:29.459184] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.248 [2024-04-26 08:59:29.459193] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.248 [2024-04-26 08:59:29.459197] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.459202] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69fe0) on tqpair=0xeffd80 00:25:12.248 [2024-04-26 08:59:29.459209] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.248 [2024-04-26 08:59:29.459216] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.248 [2024-04-26 08:59:29.459220] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.459225] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a140) on tqpair=0xeffd80 00:25:12.248 [2024-04-26 08:59:29.459236] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.459241] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeffd80) 00:25:12.248 [2024-04-26 08:59:29.459248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.248 [2024-04-26 08:59:29.459262] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a140, cid 5, qid 0 00:25:12.248 [2024-04-26 08:59:29.459402] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.248 [2024-04-26 08:59:29.459409] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.248 [2024-04-26 08:59:29.459414] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.459418] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a140) on tqpair=0xeffd80 00:25:12.248 [2024-04-26 08:59:29.459430] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.459434] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeffd80) 00:25:12.248 [2024-04-26 08:59:29.459441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.248 [2024-04-26 08:59:29.459464] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a140, cid 5, qid 0 00:25:12.248 [2024-04-26 08:59:29.459821] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.248 [2024-04-26 08:59:29.459828] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.248 [2024-04-26 08:59:29.459832] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.459837] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a140) on tqpair=0xeffd80 00:25:12.248 [2024-04-26 08:59:29.459848] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.459852] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeffd80) 00:25:12.248 [2024-04-26 08:59:29.459859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.248 [2024-04-26 08:59:29.459870] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a140, cid 5, qid 0 00:25:12.248 [2024-04-26 08:59:29.460010] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.248 [2024-04-26 08:59:29.460018] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.248 [2024-04-26 08:59:29.460022] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.460027] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a140) on tqpair=0xeffd80 00:25:12.248 [2024-04-26 08:59:29.460041] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.460046] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xeffd80) 00:25:12.248 [2024-04-26 08:59:29.460053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.248 [2024-04-26 08:59:29.460061] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.460066] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xeffd80) 00:25:12.248 [2024-04-26 08:59:29.460072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.248 [2024-04-26 08:59:29.460080] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.460084] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xeffd80) 00:25:12.248 [2024-04-26 08:59:29.460091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.248 [2024-04-26 08:59:29.460099] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.460104] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xeffd80) 00:25:12.248 [2024-04-26 08:59:29.460110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.248 [2024-04-26 08:59:29.460124] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a140, cid 5, qid 0 00:25:12.248 [2024-04-26 08:59:29.460130] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69fe0, cid 4, qid 0 00:25:12.248 [2024-04-26 08:59:29.460135] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a2a0, cid 6, qid 0 00:25:12.248 [2024-04-26 08:59:29.460141] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a400, cid 7, qid 0 00:25:12.248 [2024-04-26 08:59:29.464463] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.248 [2024-04-26 08:59:29.464471] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.248 [2024-04-26 08:59:29.464475] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.464480] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeffd80): datao=0, datal=8192, cccid=5 00:25:12.248 [2024-04-26 08:59:29.464488] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf6a140) on tqpair(0xeffd80): expected_datao=0, payload_size=8192 00:25:12.248 [2024-04-26 08:59:29.464494] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.464502] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.464506] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.464513] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.248 [2024-04-26 08:59:29.464519] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.248 [2024-04-26 08:59:29.464523] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.464528] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeffd80): datao=0, datal=512, cccid=4 00:25:12.248 [2024-04-26 08:59:29.464533] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf69fe0) on tqpair(0xeffd80): expected_datao=0, payload_size=512 00:25:12.248 [2024-04-26 08:59:29.464539] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.464545] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.464550] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.464556] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.248 [2024-04-26 08:59:29.464562] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.248 [2024-04-26 08:59:29.464566] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.464571] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeffd80): datao=0, datal=512, cccid=6 00:25:12.248 [2024-04-26 08:59:29.464577] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf6a2a0) on tqpair(0xeffd80): expected_datao=0, payload_size=512 00:25:12.248 [2024-04-26 08:59:29.464582] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.464589] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.464593] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.248 [2024-04-26 08:59:29.464599] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:12.248 [2024-04-26 08:59:29.464605] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:12.249 [2024-04-26 08:59:29.464610] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:12.249 [2024-04-26 08:59:29.464614] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xeffd80): datao=0, datal=4096, cccid=7 00:25:12.249 [2024-04-26 08:59:29.464620] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xf6a400) on tqpair(0xeffd80): expected_datao=0, payload_size=4096 00:25:12.249 [2024-04-26 08:59:29.464626] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.249 [2024-04-26 08:59:29.464632] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:12.249 [2024-04-26 08:59:29.464637] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:12.249 [2024-04-26 08:59:29.464643] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.249 [2024-04-26 08:59:29.464649] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.249 [2024-04-26 08:59:29.464653] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.249 [2024-04-26 08:59:29.464658] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a140) on tqpair=0xeffd80 00:25:12.249 [2024-04-26 08:59:29.464671] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.249 [2024-04-26 08:59:29.464678] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.249 [2024-04-26 08:59:29.464682] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.249 [2024-04-26 08:59:29.464687] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69fe0) on tqpair=0xeffd80 00:25:12.249 [2024-04-26 08:59:29.464696] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.249 [2024-04-26 08:59:29.464703] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.249 [2024-04-26 08:59:29.464708] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.249 [2024-04-26 08:59:29.464713] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a2a0) on tqpair=0xeffd80 00:25:12.249 [2024-04-26 08:59:29.464721] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.249 [2024-04-26 08:59:29.464727] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.249 [2024-04-26 08:59:29.464731] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.249 [2024-04-26 08:59:29.464736] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a400) on tqpair=0xeffd80 00:25:12.249 ===================================================== 00:25:12.249 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:12.249 ===================================================== 00:25:12.249 Controller Capabilities/Features 00:25:12.249 ================================ 00:25:12.249 Vendor ID: 8086 00:25:12.249 Subsystem Vendor ID: 8086 00:25:12.249 Serial Number: SPDK00000000000001 00:25:12.249 Model Number: SPDK bdev Controller 00:25:12.249 Firmware Version: 24.05 00:25:12.249 Recommended Arb Burst: 6 00:25:12.249 IEEE OUI Identifier: e4 d2 5c 00:25:12.249 Multi-path I/O 00:25:12.249 May have multiple subsystem ports: Yes 00:25:12.249 May have multiple controllers: Yes 00:25:12.249 Associated with SR-IOV VF: No 00:25:12.249 Max Data Transfer Size: 131072 00:25:12.249 Max Number of Namespaces: 32 00:25:12.249 Max Number of I/O Queues: 127 00:25:12.249 NVMe Specification Version (VS): 1.3 00:25:12.249 NVMe Specification Version (Identify): 1.3 00:25:12.249 Maximum Queue Entries: 128 00:25:12.249 Contiguous Queues Required: Yes 00:25:12.249 Arbitration Mechanisms Supported 00:25:12.249 Weighted Round Robin: Not Supported 00:25:12.249 Vendor Specific: Not Supported 00:25:12.249 Reset Timeout: 15000 ms 00:25:12.249 Doorbell Stride: 4 bytes 00:25:12.249 
NVM Subsystem Reset: Not Supported 00:25:12.249 Command Sets Supported 00:25:12.249 NVM Command Set: Supported 00:25:12.249 Boot Partition: Not Supported 00:25:12.249 Memory Page Size Minimum: 4096 bytes 00:25:12.249 Memory Page Size Maximum: 4096 bytes 00:25:12.249 Persistent Memory Region: Not Supported 00:25:12.249 Optional Asynchronous Events Supported 00:25:12.249 Namespace Attribute Notices: Supported 00:25:12.249 Firmware Activation Notices: Not Supported 00:25:12.249 ANA Change Notices: Not Supported 00:25:12.249 PLE Aggregate Log Change Notices: Not Supported 00:25:12.249 LBA Status Info Alert Notices: Not Supported 00:25:12.249 EGE Aggregate Log Change Notices: Not Supported 00:25:12.249 Normal NVM Subsystem Shutdown event: Not Supported 00:25:12.249 Zone Descriptor Change Notices: Not Supported 00:25:12.249 Discovery Log Change Notices: Not Supported 00:25:12.249 Controller Attributes 00:25:12.249 128-bit Host Identifier: Supported 00:25:12.249 Non-Operational Permissive Mode: Not Supported 00:25:12.249 NVM Sets: Not Supported 00:25:12.249 Read Recovery Levels: Not Supported 00:25:12.249 Endurance Groups: Not Supported 00:25:12.249 Predictable Latency Mode: Not Supported 00:25:12.249 Traffic Based Keep ALive: Not Supported 00:25:12.249 Namespace Granularity: Not Supported 00:25:12.249 SQ Associations: Not Supported 00:25:12.249 UUID List: Not Supported 00:25:12.249 Multi-Domain Subsystem: Not Supported 00:25:12.249 Fixed Capacity Management: Not Supported 00:25:12.249 Variable Capacity Management: Not Supported 00:25:12.249 Delete Endurance Group: Not Supported 00:25:12.249 Delete NVM Set: Not Supported 00:25:12.249 Extended LBA Formats Supported: Not Supported 00:25:12.249 Flexible Data Placement Supported: Not Supported 00:25:12.249 00:25:12.249 Controller Memory Buffer Support 00:25:12.249 ================================ 00:25:12.249 Supported: No 00:25:12.249 00:25:12.249 Persistent Memory Region Support 00:25:12.249 ================================ 00:25:12.249 Supported: No 00:25:12.249 00:25:12.249 Admin Command Set Attributes 00:25:12.249 ============================ 00:25:12.249 Security Send/Receive: Not Supported 00:25:12.249 Format NVM: Not Supported 00:25:12.249 Firmware Activate/Download: Not Supported 00:25:12.249 Namespace Management: Not Supported 00:25:12.249 Device Self-Test: Not Supported 00:25:12.249 Directives: Not Supported 00:25:12.249 NVMe-MI: Not Supported 00:25:12.249 Virtualization Management: Not Supported 00:25:12.249 Doorbell Buffer Config: Not Supported 00:25:12.249 Get LBA Status Capability: Not Supported 00:25:12.249 Command & Feature Lockdown Capability: Not Supported 00:25:12.249 Abort Command Limit: 4 00:25:12.249 Async Event Request Limit: 4 00:25:12.249 Number of Firmware Slots: N/A 00:25:12.249 Firmware Slot 1 Read-Only: N/A 00:25:12.249 Firmware Activation Without Reset: N/A 00:25:12.249 Multiple Update Detection Support: N/A 00:25:12.249 Firmware Update Granularity: No Information Provided 00:25:12.249 Per-Namespace SMART Log: No 00:25:12.249 Asymmetric Namespace Access Log Page: Not Supported 00:25:12.249 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:12.249 Command Effects Log Page: Supported 00:25:12.249 Get Log Page Extended Data: Supported 00:25:12.249 Telemetry Log Pages: Not Supported 00:25:12.249 Persistent Event Log Pages: Not Supported 00:25:12.249 Supported Log Pages Log Page: May Support 00:25:12.249 Commands Supported & Effects Log Page: Not Supported 00:25:12.249 Feature Identifiers & Effects Log Page:May Support 
00:25:12.249 NVMe-MI Commands & Effects Log Page: May Support 00:25:12.249 Data Area 4 for Telemetry Log: Not Supported 00:25:12.249 Error Log Page Entries Supported: 128 00:25:12.249 Keep Alive: Supported 00:25:12.249 Keep Alive Granularity: 10000 ms 00:25:12.249 00:25:12.249 NVM Command Set Attributes 00:25:12.249 ========================== 00:25:12.249 Submission Queue Entry Size 00:25:12.249 Max: 64 00:25:12.249 Min: 64 00:25:12.249 Completion Queue Entry Size 00:25:12.249 Max: 16 00:25:12.249 Min: 16 00:25:12.249 Number of Namespaces: 32 00:25:12.249 Compare Command: Supported 00:25:12.249 Write Uncorrectable Command: Not Supported 00:25:12.249 Dataset Management Command: Supported 00:25:12.249 Write Zeroes Command: Supported 00:25:12.249 Set Features Save Field: Not Supported 00:25:12.249 Reservations: Supported 00:25:12.249 Timestamp: Not Supported 00:25:12.249 Copy: Supported 00:25:12.249 Volatile Write Cache: Present 00:25:12.249 Atomic Write Unit (Normal): 1 00:25:12.249 Atomic Write Unit (PFail): 1 00:25:12.249 Atomic Compare & Write Unit: 1 00:25:12.249 Fused Compare & Write: Supported 00:25:12.249 Scatter-Gather List 00:25:12.249 SGL Command Set: Supported 00:25:12.249 SGL Keyed: Supported 00:25:12.249 SGL Bit Bucket Descriptor: Not Supported 00:25:12.249 SGL Metadata Pointer: Not Supported 00:25:12.249 Oversized SGL: Not Supported 00:25:12.249 SGL Metadata Address: Not Supported 00:25:12.249 SGL Offset: Supported 00:25:12.249 Transport SGL Data Block: Not Supported 00:25:12.249 Replay Protected Memory Block: Not Supported 00:25:12.249 00:25:12.249 Firmware Slot Information 00:25:12.249 ========================= 00:25:12.249 Active slot: 1 00:25:12.249 Slot 1 Firmware Revision: 24.05 00:25:12.249 00:25:12.249 00:25:12.249 Commands Supported and Effects 00:25:12.249 ============================== 00:25:12.249 Admin Commands 00:25:12.249 -------------- 00:25:12.249 Get Log Page (02h): Supported 00:25:12.249 Identify (06h): Supported 00:25:12.249 Abort (08h): Supported 00:25:12.249 Set Features (09h): Supported 00:25:12.249 Get Features (0Ah): Supported 00:25:12.249 Asynchronous Event Request (0Ch): Supported 00:25:12.249 Keep Alive (18h): Supported 00:25:12.249 I/O Commands 00:25:12.249 ------------ 00:25:12.249 Flush (00h): Supported LBA-Change 00:25:12.249 Write (01h): Supported LBA-Change 00:25:12.249 Read (02h): Supported 00:25:12.249 Compare (05h): Supported 00:25:12.249 Write Zeroes (08h): Supported LBA-Change 00:25:12.249 Dataset Management (09h): Supported LBA-Change 00:25:12.249 Copy (19h): Supported LBA-Change 00:25:12.249 Unknown (79h): Supported LBA-Change 00:25:12.249 Unknown (7Ah): Supported 00:25:12.249 00:25:12.249 Error Log 00:25:12.249 ========= 00:25:12.249 00:25:12.249 Arbitration 00:25:12.249 =========== 00:25:12.249 Arbitration Burst: 1 00:25:12.249 00:25:12.249 Power Management 00:25:12.249 ================ 00:25:12.249 Number of Power States: 1 00:25:12.249 Current Power State: Power State #0 00:25:12.249 Power State #0: 00:25:12.249 Max Power: 0.00 W 00:25:12.249 Non-Operational State: Operational 00:25:12.249 Entry Latency: Not Reported 00:25:12.249 Exit Latency: Not Reported 00:25:12.249 Relative Read Throughput: 0 00:25:12.249 Relative Read Latency: 0 00:25:12.249 Relative Write Throughput: 0 00:25:12.249 Relative Write Latency: 0 00:25:12.249 Idle Power: Not Reported 00:25:12.249 Active Power: Not Reported 00:25:12.249 Non-Operational Permissive Mode: Not Supported 00:25:12.249 00:25:12.249 Health Information 00:25:12.249 ================== 
00:25:12.249 Critical Warnings: 00:25:12.249 Available Spare Space: OK 00:25:12.249 Temperature: OK 00:25:12.249 Device Reliability: OK 00:25:12.249 Read Only: No 00:25:12.249 Volatile Memory Backup: OK 00:25:12.249 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:12.249 Temperature Threshold: [2024-04-26 08:59:29.464825] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.249 [2024-04-26 08:59:29.464831] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xeffd80) 00:25:12.249 [2024-04-26 08:59:29.464838] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.249 [2024-04-26 08:59:29.464852] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf6a400, cid 7, qid 0 00:25:12.249 [2024-04-26 08:59:29.465087] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.249 [2024-04-26 08:59:29.465096] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.249 [2024-04-26 08:59:29.465100] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.249 [2024-04-26 08:59:29.465105] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf6a400) on tqpair=0xeffd80 00:25:12.249 [2024-04-26 08:59:29.465136] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:12.249 [2024-04-26 08:59:29.465149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.249 [2024-04-26 08:59:29.465157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.249 [2024-04-26 08:59:29.465164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.249 [2024-04-26 08:59:29.465172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.249 [2024-04-26 08:59:29.465181] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.249 [2024-04-26 08:59:29.465186] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.249 [2024-04-26 08:59:29.465190] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.249 [2024-04-26 08:59:29.465198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.249 [2024-04-26 08:59:29.465213] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.249 [2024-04-26 08:59:29.465361] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.249 [2024-04-26 08:59:29.465369] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.249 [2024-04-26 08:59:29.465373] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.249 [2024-04-26 08:59:29.465378] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e80) on tqpair=0xeffd80 00:25:12.249 [2024-04-26 08:59:29.465386] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.249 [2024-04-26 08:59:29.465390] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.465395] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.250 [2024-04-26 08:59:29.465402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.250 [2024-04-26 08:59:29.465419] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.250 [2024-04-26 08:59:29.465576] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.250 [2024-04-26 08:59:29.465584] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.250 [2024-04-26 08:59:29.465592] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.465597] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e80) on tqpair=0xeffd80 00:25:12.250 [2024-04-26 08:59:29.465603] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:12.250 [2024-04-26 08:59:29.465609] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:12.250 [2024-04-26 08:59:29.465620] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.465625] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.465630] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.250 [2024-04-26 08:59:29.465637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.250 [2024-04-26 08:59:29.465651] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.250 [2024-04-26 08:59:29.465794] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.250 [2024-04-26 08:59:29.465802] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.250 [2024-04-26 08:59:29.465806] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.465811] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e80) on tqpair=0xeffd80 00:25:12.250 [2024-04-26 08:59:29.465822] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.465827] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.465831] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.250 [2024-04-26 08:59:29.465839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.250 [2024-04-26 08:59:29.465851] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.250 [2024-04-26 08:59:29.466203] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.250 [2024-04-26 08:59:29.466210] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.250 [2024-04-26 08:59:29.466214] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.466219] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e80) on tqpair=0xeffd80 00:25:12.250 [2024-04-26 08:59:29.466229] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.466235] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.466239] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.250 [2024-04-26 08:59:29.466246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.250 [2024-04-26 08:59:29.466257] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.250 [2024-04-26 08:59:29.466402] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.250 [2024-04-26 08:59:29.466409] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.250 [2024-04-26 08:59:29.466414] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.466418] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e80) on tqpair=0xeffd80 00:25:12.250 [2024-04-26 08:59:29.466429] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.466434] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.466439] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.250 [2024-04-26 08:59:29.466446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.250 [2024-04-26 08:59:29.466465] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.250 [2024-04-26 08:59:29.466813] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.250 [2024-04-26 08:59:29.466820] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.250 [2024-04-26 08:59:29.466824] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.466829] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e80) on tqpair=0xeffd80 00:25:12.250 [2024-04-26 08:59:29.466839] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.466844] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.466849] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.250 [2024-04-26 08:59:29.466856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.250 [2024-04-26 08:59:29.466867] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.250 [2024-04-26 08:59:29.467006] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.250 [2024-04-26 08:59:29.467013] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.250 [2024-04-26 08:59:29.467018] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.467022] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e80) on tqpair=0xeffd80 00:25:12.250 [2024-04-26 08:59:29.467034] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.467038] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.467043] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.250 
[2024-04-26 08:59:29.467050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.250 [2024-04-26 08:59:29.467063] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.250 [2024-04-26 08:59:29.467203] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.250 [2024-04-26 08:59:29.467211] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.250 [2024-04-26 08:59:29.467215] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.467220] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e80) on tqpair=0xeffd80 00:25:12.250 [2024-04-26 08:59:29.467230] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.467235] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.467240] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.250 [2024-04-26 08:59:29.467247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.250 [2024-04-26 08:59:29.467259] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.250 [2024-04-26 08:59:29.467403] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.250 [2024-04-26 08:59:29.467411] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.250 [2024-04-26 08:59:29.467415] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.467420] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e80) on tqpair=0xeffd80 00:25:12.250 [2024-04-26 08:59:29.467431] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.467435] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.467440] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.250 [2024-04-26 08:59:29.467447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.250 [2024-04-26 08:59:29.467465] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.250 [2024-04-26 08:59:29.467607] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.250 [2024-04-26 08:59:29.467614] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.250 [2024-04-26 08:59:29.467621] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.467626] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e80) on tqpair=0xeffd80 00:25:12.250 [2024-04-26 08:59:29.467638] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.467642] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.467647] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.250 [2024-04-26 08:59:29.467654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.250 [2024-04-26 08:59:29.467667] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.250 [2024-04-26 08:59:29.467810] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.250 [2024-04-26 08:59:29.467818] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.250 [2024-04-26 08:59:29.467822] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.467827] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e80) on tqpair=0xeffd80 00:25:12.250 [2024-04-26 08:59:29.467838] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.467843] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.467848] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.250 [2024-04-26 08:59:29.467855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.250 [2024-04-26 08:59:29.467867] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.250 [2024-04-26 08:59:29.468010] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.250 [2024-04-26 08:59:29.468017] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.250 [2024-04-26 08:59:29.468022] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.468026] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e80) on tqpair=0xeffd80 00:25:12.250 [2024-04-26 08:59:29.468037] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.468042] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.468046] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.250 [2024-04-26 08:59:29.468053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.250 [2024-04-26 08:59:29.468066] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.250 [2024-04-26 08:59:29.468211] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.250 [2024-04-26 08:59:29.468218] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.250 [2024-04-26 08:59:29.468223] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.468227] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e80) on tqpair=0xeffd80 00:25:12.250 [2024-04-26 08:59:29.468238] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.468243] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.468248] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.250 [2024-04-26 08:59:29.468255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.250 [2024-04-26 08:59:29.468267] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.250 [2024-04-26 08:59:29.468407] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.250 
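The run of near-identical FABRIC PROPERTY GET entries above is the controller shutdown handshake for nqn.2016-06.io.spdk:cnode1: once CC.SHN is set, the host keeps reading CSTS over the TCP admin queue until the controller reports shutdown complete ("shutdown complete in 7 milliseconds" just below). A rough stand-alone sketch of the same poll against a locally attached controller, using nvme-cli; the device path, the timeout value, and the exact register field text are assumptions, not values from this run:

  # Poll CSTS.SHST until the controller reports shutdown complete;
  # approximates nvme_ctrlr_shutdown_poll_async, over PCIe instead of TCP.
  dev=/dev/nvme0                        # hypothetical local controller
  deadline=$((SECONDS + 10))            # mirrors "shutdown timeout = 10000 ms"
  while (( SECONDS < deadline )); do
      nvme show-regs "$dev" --human-readable | grep -qi 'shst.*complete' && break
      sleep 0.001
  done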
[2024-04-26 08:59:29.468415] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.250 [2024-04-26 08:59:29.468419] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.468426] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e80) on tqpair=0xeffd80 00:25:12.250 [2024-04-26 08:59:29.468438] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.468443] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.468447] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xeffd80) 00:25:12.250 [2024-04-26 08:59:29.472464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.250 [2024-04-26 08:59:29.472480] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf69e80, cid 3, qid 0 00:25:12.250 [2024-04-26 08:59:29.472665] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:12.250 [2024-04-26 08:59:29.472673] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:12.250 [2024-04-26 08:59:29.472677] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:12.250 [2024-04-26 08:59:29.472682] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf69e80) on tqpair=0xeffd80 00:25:12.250 [2024-04-26 08:59:29.472692] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:25:12.250 0 Kelvin (-273 Celsius) 00:25:12.250 Available Spare: 0% 00:25:12.250 Available Spare Threshold: 0% 00:25:12.250 Life Percentage Used: 0% 00:25:12.250 Data Units Read: 0 00:25:12.250 Data Units Written: 0 00:25:12.250 Host Read Commands: 0 00:25:12.250 Host Write Commands: 0 00:25:12.250 Controller Busy Time: 0 minutes 00:25:12.250 Power Cycles: 0 00:25:12.250 Power On Hours: 0 hours 00:25:12.250 Unsafe Shutdowns: 0 00:25:12.250 Unrecoverable Media Errors: 0 00:25:12.250 Lifetime Error Log Entries: 0 00:25:12.250 Warning Temperature Time: 0 minutes 00:25:12.250 Critical Temperature Time: 0 minutes 00:25:12.250 00:25:12.250 Number of Queues 00:25:12.250 ================ 00:25:12.250 Number of I/O Submission Queues: 127 00:25:12.250 Number of I/O Completion Queues: 127 00:25:12.250 00:25:12.250 Active Namespaces 00:25:12.250 ================= 00:25:12.250 Namespace ID:1 00:25:12.250 Error Recovery Timeout: Unlimited 00:25:12.250 Command Set Identifier: NVM (00h) 00:25:12.250 Deallocate: Supported 00:25:12.250 Deallocated/Unwritten Error: Not Supported 00:25:12.250 Deallocated Read Value: Unknown 00:25:12.250 Deallocate in Write Zeroes: Not Supported 00:25:12.250 Deallocated Guard Field: 0xFFFF 00:25:12.250 Flush: Supported 00:25:12.250 Reservation: Supported 00:25:12.250 Namespace Sharing Capabilities: Multiple Controllers 00:25:12.250 Size (in LBAs): 131072 (0GiB) 00:25:12.250 Capacity (in LBAs): 131072 (0GiB) 00:25:12.250 Utilization (in LBAs): 131072 (0GiB) 00:25:12.250 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:12.250 EUI64: ABCDEF0123456789 00:25:12.250 UUID: 2f0a67b6-c4d9-45ac-8103-694668acb716 00:25:12.250 Thin Provisioning: Not Supported 00:25:12.250 Per-NS Atomic Units: Yes 00:25:12.250 Atomic Boundary Size (Normal): 0 00:25:12.250 Atomic Boundary Size (PFail): 0 00:25:12.250 Atomic Boundary Offset: 0 00:25:12.250 Maximum Single Source Range Length: 65535 00:25:12.250 
Maximum Copy Length: 65535 00:25:12.250 Maximum Source Range Count: 1 00:25:12.250 NGUID/EUI64 Never Reused: No 00:25:12.250 Namespace Write Protected: No 00:25:12.250 Number of LBA Formats: 1 00:25:12.250 Current LBA Format: LBA Format #00 00:25:12.250 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:12.250 00:25:12.250 08:59:29 -- host/identify.sh@51 -- # sync 00:25:12.511 08:59:29 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:12.511 08:59:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:12.511 08:59:29 -- common/autotest_common.sh@10 -- # set +x 00:25:12.511 08:59:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:12.511 08:59:29 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:12.511 08:59:29 -- host/identify.sh@56 -- # nvmftestfini 00:25:12.511 08:59:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:12.511 08:59:29 -- nvmf/common.sh@117 -- # sync 00:25:12.511 08:59:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:12.511 08:59:29 -- nvmf/common.sh@120 -- # set +e 00:25:12.511 08:59:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:12.511 08:59:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:12.511 rmmod nvme_tcp 00:25:12.511 rmmod nvme_fabrics 00:25:12.511 rmmod nvme_keyring 00:25:12.511 08:59:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:12.511 08:59:29 -- nvmf/common.sh@124 -- # set -e 00:25:12.511 08:59:29 -- nvmf/common.sh@125 -- # return 0 00:25:12.511 08:59:29 -- nvmf/common.sh@478 -- # '[' -n 2163110 ']' 00:25:12.511 08:59:29 -- nvmf/common.sh@479 -- # killprocess 2163110 00:25:12.511 08:59:29 -- common/autotest_common.sh@936 -- # '[' -z 2163110 ']' 00:25:12.511 08:59:29 -- common/autotest_common.sh@940 -- # kill -0 2163110 00:25:12.511 08:59:29 -- common/autotest_common.sh@941 -- # uname 00:25:12.511 08:59:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:12.511 08:59:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2163110 00:25:12.511 08:59:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:12.511 08:59:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:12.511 08:59:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2163110' 00:25:12.511 killing process with pid 2163110 00:25:12.511 08:59:29 -- common/autotest_common.sh@955 -- # kill 2163110 00:25:12.511 [2024-04-26 08:59:29.617273] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:12.511 08:59:29 -- common/autotest_common.sh@960 -- # wait 2163110 00:25:12.769 08:59:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:12.769 08:59:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:12.769 08:59:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:12.769 08:59:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:12.769 08:59:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:12.769 08:59:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.769 08:59:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.769 08:59:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.301 08:59:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:15.301 00:25:15.301 real 0m10.989s 00:25:15.301 user 0m8.333s 00:25:15.301 sys 0m5.811s 00:25:15.301 08:59:31 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:25:15.301 08:59:31 -- common/autotest_common.sh@10 -- # set +x 00:25:15.301 ************************************ 00:25:15.301 END TEST nvmf_identify 00:25:15.301 ************************************ 00:25:15.301 08:59:31 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:15.301 08:59:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:15.301 08:59:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:15.301 08:59:31 -- common/autotest_common.sh@10 -- # set +x 00:25:15.301 ************************************ 00:25:15.301 START TEST nvmf_perf 00:25:15.301 ************************************ 00:25:15.301 08:59:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:15.301 * Looking for test storage... 00:25:15.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:15.301 08:59:32 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.301 08:59:32 -- nvmf/common.sh@7 -- # uname -s 00:25:15.301 08:59:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.301 08:59:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.301 08:59:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.301 08:59:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.301 08:59:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.301 08:59:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.301 08:59:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.301 08:59:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.301 08:59:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.301 08:59:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.301 08:59:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:15.301 08:59:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:15.301 08:59:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.301 08:59:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.301 08:59:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.301 08:59:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.301 08:59:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.301 08:59:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.301 08:59:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.301 08:59:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.301 08:59:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.301 08:59:32 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.301 08:59:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.301 08:59:32 -- paths/export.sh@5 -- # export PATH 00:25:15.301 08:59:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.301 08:59:32 -- nvmf/common.sh@47 -- # : 0 00:25:15.301 08:59:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:15.301 08:59:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:15.301 08:59:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.301 08:59:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.301 08:59:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.301 08:59:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:15.301 08:59:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:15.301 08:59:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:15.301 08:59:32 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:15.301 08:59:32 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:15.301 08:59:32 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:15.301 08:59:32 -- host/perf.sh@17 -- # nvmftestinit 00:25:15.301 08:59:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:15.301 08:59:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.301 08:59:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:15.301 08:59:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:15.302 08:59:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:15.302 08:59:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.302 08:59:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.302 08:59:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.302 08:59:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:15.302 08:59:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:15.302 08:59:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:15.302 08:59:32 -- 
common/autotest_common.sh@10 -- # set +x 00:25:21.864 08:59:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:21.864 08:59:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:21.864 08:59:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:21.864 08:59:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:21.864 08:59:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:21.864 08:59:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:21.864 08:59:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:21.864 08:59:38 -- nvmf/common.sh@295 -- # net_devs=() 00:25:21.864 08:59:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:21.864 08:59:38 -- nvmf/common.sh@296 -- # e810=() 00:25:21.864 08:59:38 -- nvmf/common.sh@296 -- # local -ga e810 00:25:21.864 08:59:38 -- nvmf/common.sh@297 -- # x722=() 00:25:21.864 08:59:38 -- nvmf/common.sh@297 -- # local -ga x722 00:25:21.864 08:59:38 -- nvmf/common.sh@298 -- # mlx=() 00:25:21.864 08:59:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:21.864 08:59:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.864 08:59:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.865 08:59:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.865 08:59:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.865 08:59:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.865 08:59:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.865 08:59:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.865 08:59:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.865 08:59:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.865 08:59:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.865 08:59:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.865 08:59:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:21.865 08:59:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:21.865 08:59:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:21.865 08:59:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:21.865 08:59:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:21.865 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:21.865 08:59:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:21.865 08:59:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:21.865 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:21.865 08:59:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
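For reference, the NIC discovery being traced here is plain sysfs globbing: each E810 function matched above (vendor:device 0x8086:0x159b) is resolved to its kernel netdev through /sys/bus/pci/devices/<bdf>/net, which is exactly what the pci_net_devs glob does. Stand-alone equivalent for the first port on this rig (the cvl_0_0 name reported just below comes from the harness's interface renaming, not stock udev):

  # Map a discovered PCI function to the netdev the test traffic will use.
  pci=0000:af:00.0
  ls "/sys/bus/pci/devices/$pci/net/"   # prints cvl_0_0 on this rig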
00:25:21.865 08:59:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:21.865 08:59:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:21.865 08:59:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.865 08:59:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:21.865 08:59:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.865 08:59:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:21.865 Found net devices under 0000:af:00.0: cvl_0_0 00:25:21.865 08:59:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.865 08:59:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:21.865 08:59:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.865 08:59:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:21.865 08:59:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.865 08:59:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:21.865 Found net devices under 0000:af:00.1: cvl_0_1 00:25:21.865 08:59:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.865 08:59:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:21.865 08:59:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:21.865 08:59:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:21.865 08:59:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.865 08:59:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.865 08:59:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.865 08:59:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:21.865 08:59:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.865 08:59:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.865 08:59:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:21.865 08:59:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.865 08:59:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.865 08:59:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:21.865 08:59:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:21.865 08:59:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.865 08:59:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.865 08:59:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.865 08:59:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.865 08:59:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:21.865 08:59:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.865 08:59:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.865 08:59:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.865 08:59:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:21.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:21.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:25:21.865 00:25:21.865 --- 10.0.0.2 ping statistics --- 00:25:21.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.865 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:25:21.865 08:59:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:21.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:25:21.865 00:25:21.865 --- 10.0.0.1 ping statistics --- 00:25:21.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.865 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:25:21.865 08:59:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.865 08:59:38 -- nvmf/common.sh@411 -- # return 0 00:25:21.865 08:59:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:21.865 08:59:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.865 08:59:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:21.865 08:59:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.865 08:59:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:21.865 08:59:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:21.865 08:59:38 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:21.865 08:59:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:21.865 08:59:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:21.865 08:59:38 -- common/autotest_common.sh@10 -- # set +x 00:25:21.865 08:59:38 -- nvmf/common.sh@470 -- # nvmfpid=2167094 00:25:21.865 08:59:38 -- nvmf/common.sh@471 -- # waitforlisten 2167094 00:25:21.865 08:59:38 -- common/autotest_common.sh@817 -- # '[' -z 2167094 ']' 00:25:21.865 08:59:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.865 08:59:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:21.865 08:59:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.866 08:59:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:21.866 08:59:38 -- common/autotest_common.sh@10 -- # set +x 00:25:21.866 08:59:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:21.866 [2024-04-26 08:59:38.720710] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:25:21.866 [2024-04-26 08:59:38.720756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.866 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.866 [2024-04-26 08:59:38.795885] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.866 [2024-04-26 08:59:38.867661] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.866 [2024-04-26 08:59:38.867698] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
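With ping working in both directions, the harness starts the target inside the server-side namespace. The launch line from this run, with its flags spelled out (repository path shortened here for readability):

  # -i 0: shared-memory id for this app instance;
  # -e 0xFFFF: enable every tracepoint group
  #           (hence the "Tracepoint Group Mask 0xFFFF" notice);
  # -m 0xF: run reactors on cores 0-3, matching the four
  #         "Reactor started on core N" notices that follow.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF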
00:25:21.866 [2024-04-26 08:59:38.867707] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.866 [2024-04-26 08:59:38.867715] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.866 [2024-04-26 08:59:38.867738] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.866 [2024-04-26 08:59:38.867780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.866 [2024-04-26 08:59:38.867990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.866 [2024-04-26 08:59:38.868053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.866 [2024-04-26 08:59:38.868054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.433 08:59:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:22.433 08:59:39 -- common/autotest_common.sh@850 -- # return 0 00:25:22.433 08:59:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:22.433 08:59:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:22.433 08:59:39 -- common/autotest_common.sh@10 -- # set +x 00:25:22.433 08:59:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.433 08:59:39 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:22.433 08:59:39 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:25.734 08:59:42 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:25.734 08:59:42 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:25.734 08:59:42 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:25:25.734 08:59:42 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:26.007 08:59:42 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:26.007 08:59:42 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:25:26.007 08:59:42 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:26.007 08:59:42 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:26.007 08:59:42 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:26.007 [2024-04-26 08:59:43.148838] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.007 08:59:43 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:26.265 08:59:43 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:26.265 08:59:43 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:26.523 08:59:43 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:26.523 08:59:43 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:26.523 08:59:43 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:26.781 [2024-04-26 08:59:43.887636] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.781 08:59:43 -- host/perf.sh@49 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:27.040 08:59:44 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:25:27.040 08:59:44 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:27.040 08:59:44 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:27.040 08:59:44 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:28.415 Initializing NVMe Controllers 00:25:28.415 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:25:28.415 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:25:28.415 Initialization complete. Launching workers. 00:25:28.415 ======================================================== 00:25:28.415 Latency(us) 00:25:28.415 Device Information : IOPS MiB/s Average min max 00:25:28.415 PCIE (0000:d8:00.0) NSID 1 from core 0: 103488.62 404.25 308.76 33.84 5215.88 00:25:28.415 ======================================================== 00:25:28.415 Total : 103488.62 404.25 308.76 33.84 5215.88 00:25:28.415 00:25:28.415 08:59:45 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:28.415 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.791 Initializing NVMe Controllers 00:25:29.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:29.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:29.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:29.791 Initialization complete. Launching workers. 00:25:29.791 ======================================================== 00:25:29.791 Latency(us) 00:25:29.791 Device Information : IOPS MiB/s Average min max 00:25:29.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.00 0.31 12724.98 584.02 46393.84 00:25:29.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17991.32 7802.44 50860.54 00:25:29.791 ======================================================== 00:25:29.791 Total : 136.00 0.53 14893.47 584.02 50860.54 00:25:29.791 00:25:29.791 08:59:46 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:29.791 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.725 Initializing NVMe Controllers 00:25:30.725 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:30.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:30.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:30.725 Initialization complete. Launching workers. 
00:25:30.725 ======================================================== 00:25:30.725 Latency(us) 00:25:30.725 Device Information : IOPS MiB/s Average min max 00:25:30.725 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8222.78 32.12 3899.13 756.14 10151.48 00:25:30.725 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3866.48 15.10 8334.25 4247.31 15976.76 00:25:30.725 ======================================================== 00:25:30.725 Total : 12089.26 47.22 5317.61 756.14 15976.76 00:25:30.725 00:25:30.725 08:59:47 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:30.725 08:59:47 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:30.725 08:59:47 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:30.725 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.257 Initializing NVMe Controllers 00:25:33.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:33.257 Controller IO queue size 128, less than required. 00:25:33.257 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:33.257 Controller IO queue size 128, less than required. 00:25:33.257 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:33.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:33.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:33.257 Initialization complete. Launching workers. 00:25:33.257 ======================================================== 00:25:33.257 Latency(us) 00:25:33.257 Device Information : IOPS MiB/s Average min max 00:25:33.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 862.50 215.63 153555.94 80789.22 240575.05 00:25:33.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 587.16 146.79 232376.17 77616.20 368900.42 00:25:33.257 ======================================================== 00:25:33.257 Total : 1449.66 362.42 185480.72 77616.20 368900.42 00:25:33.257 00:25:33.257 08:59:50 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:33.257 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.516 No valid NVMe controllers or AIO or URING devices found 00:25:33.516 Initializing NVMe Controllers 00:25:33.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:33.516 Controller IO queue size 128, less than required. 00:25:33.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:33.516 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:33.516 Controller IO queue size 128, less than required. 00:25:33.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:33.516 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:33.516 WARNING: Some requested NVMe devices were skipped 00:25:33.516 08:59:50 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:33.516 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.050 Initializing NVMe Controllers 00:25:36.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:36.050 Controller IO queue size 128, less than required. 00:25:36.050 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:36.050 Controller IO queue size 128, less than required. 00:25:36.050 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:36.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:36.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:36.050 Initialization complete. Launching workers. 00:25:36.050 00:25:36.050 ==================== 00:25:36.050 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:36.050 TCP transport: 00:25:36.050 polls: 55309 00:25:36.050 idle_polls: 19107 00:25:36.050 sock_completions: 36202 00:25:36.050 nvme_completions: 3347 00:25:36.050 submitted_requests: 4998 00:25:36.050 queued_requests: 1 00:25:36.050 00:25:36.050 ==================== 00:25:36.050 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:36.050 TCP transport: 00:25:36.050 polls: 53152 00:25:36.050 idle_polls: 14570 00:25:36.050 sock_completions: 38582 00:25:36.050 nvme_completions: 3489 00:25:36.050 submitted_requests: 5248 00:25:36.050 queued_requests: 1 00:25:36.050 ======================================================== 00:25:36.050 Latency(us) 00:25:36.050 Device Information : IOPS MiB/s Average min max 00:25:36.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 836.50 209.12 158605.10 86410.90 237435.64 00:25:36.050 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 872.00 218.00 152164.40 56115.05 219717.93 00:25:36.050 ======================================================== 00:25:36.050 Total : 1708.49 427.12 155317.84 56115.05 237435.64 00:25:36.050 00:25:36.050 08:59:53 -- host/perf.sh@66 -- # sync 00:25:36.050 08:59:53 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:36.050 08:59:53 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:36.050 08:59:53 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:36.050 08:59:53 -- host/perf.sh@114 -- # nvmftestfini 00:25:36.050 08:59:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:36.050 08:59:53 -- nvmf/common.sh@117 -- # sync 00:25:36.050 08:59:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:36.050 08:59:53 -- nvmf/common.sh@120 -- # set +e 00:25:36.050 08:59:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:36.050 08:59:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:36.050 rmmod nvme_tcp 00:25:36.050 rmmod nvme_fabrics 00:25:36.050 rmmod nvme_keyring 00:25:36.050 08:59:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:36.050 08:59:53 -- nvmf/common.sh@124 -- # set -e 00:25:36.050 08:59:53 -- nvmf/common.sh@125 -- # return 0 00:25:36.050 08:59:53 -- 
nvmf/common.sh@478 -- # '[' -n 2167094 ']' 00:25:36.050 08:59:53 -- nvmf/common.sh@479 -- # killprocess 2167094 00:25:36.050 08:59:53 -- common/autotest_common.sh@936 -- # '[' -z 2167094 ']' 00:25:36.050 08:59:53 -- common/autotest_common.sh@940 -- # kill -0 2167094 00:25:36.050 08:59:53 -- common/autotest_common.sh@941 -- # uname 00:25:36.050 08:59:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:36.050 08:59:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2167094 00:25:36.309 08:59:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:36.309 08:59:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:36.309 08:59:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2167094' 00:25:36.309 killing process with pid 2167094 00:25:36.309 08:59:53 -- common/autotest_common.sh@955 -- # kill 2167094 00:25:36.309 08:59:53 -- common/autotest_common.sh@960 -- # wait 2167094 00:25:38.212 08:59:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:38.212 08:59:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:38.212 08:59:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:38.212 08:59:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:38.212 08:59:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:38.212 08:59:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.212 08:59:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.212 08:59:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.769 08:59:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:40.769 00:25:40.769 real 0m25.370s 00:25:40.769 user 1m6.372s 00:25:40.769 sys 0m8.073s 00:25:40.769 08:59:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:40.769 08:59:57 -- common/autotest_common.sh@10 -- # set +x 00:25:40.769 ************************************ 00:25:40.769 END TEST nvmf_perf 00:25:40.769 ************************************ 00:25:40.769 08:59:57 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:40.769 08:59:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:40.769 08:59:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:40.769 08:59:57 -- common/autotest_common.sh@10 -- # set +x 00:25:40.769 ************************************ 00:25:40.769 START TEST nvmf_fio_host 00:25:40.769 ************************************ 00:25:40.769 08:59:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:40.769 * Looking for test storage... 
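The nvmf_perf teardown above (nvmftestfini) amounts to: kill the target, unload the fabrics modules, and flush the test addressing. A minimal manual equivalent, sketched; the netns delete stands in for remove_spdk_ns, whose internals are not shown in this log:

  # Approximate manual teardown of the nvmf/tcp phy test environment.
  kill -9 2167094                            # target pid from this run
  modprobe -r nvme-tcp nvme-fabrics          # the rmmod lines above
  ip netns del cvl_0_0_ns_spdk 2>/dev/null   # assumption: remove_spdk_ns equivalent
  ip -4 addr flush cvl_0_1                   # as run at the end of the test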
00:25:40.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.769 08:59:57 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.769 08:59:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.769 08:59:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.769 08:59:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.769 08:59:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.769 08:59:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.769 08:59:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.769 08:59:57 -- paths/export.sh@5 -- # export PATH 00:25:40.769 08:59:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.769 08:59:57 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.769 08:59:57 -- nvmf/common.sh@7 -- # uname -s 00:25:40.769 08:59:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.769 08:59:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.769 08:59:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.769 08:59:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.769 08:59:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.769 08:59:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.769 08:59:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.769 08:59:57 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.770 08:59:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.770 08:59:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.770 08:59:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:40.770 08:59:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:40.770 08:59:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.770 08:59:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.770 08:59:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.770 08:59:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.770 08:59:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.770 08:59:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.770 08:59:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.770 08:59:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.770 08:59:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.770 08:59:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.770 08:59:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.770 08:59:57 -- paths/export.sh@5 -- # export PATH 00:25:40.770 08:59:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.770 08:59:57 -- nvmf/common.sh@47 -- # : 0 00:25:40.770 08:59:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:40.770 08:59:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:40.770 08:59:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.770 08:59:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.770 08:59:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.770 08:59:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:40.770 08:59:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:40.770 08:59:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:40.770 08:59:57 -- host/fio.sh@12 -- # nvmftestinit 00:25:40.770 08:59:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:40.770 08:59:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.770 08:59:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:40.770 08:59:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:40.770 08:59:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:40.770 08:59:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.770 08:59:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:40.770 08:59:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.770 08:59:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:40.770 08:59:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:40.770 08:59:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:40.770 08:59:57 -- common/autotest_common.sh@10 -- # set +x 00:25:47.332 09:00:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:47.332 09:00:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:47.332 09:00:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:47.332 09:00:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:47.332 09:00:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:47.332 09:00:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:47.332 09:00:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:47.332 09:00:04 -- nvmf/common.sh@295 -- # net_devs=() 00:25:47.332 09:00:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:47.332 09:00:04 -- nvmf/common.sh@296 -- # e810=() 00:25:47.332 09:00:04 -- nvmf/common.sh@296 -- # local -ga e810 00:25:47.332 09:00:04 -- nvmf/common.sh@297 -- # x722=() 00:25:47.332 09:00:04 -- nvmf/common.sh@297 -- # local -ga x722 00:25:47.332 09:00:04 -- nvmf/common.sh@298 -- # mlx=() 00:25:47.332 09:00:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:47.332 09:00:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:47.332 09:00:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:47.332 09:00:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:47.332 09:00:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:47.332 09:00:04 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:47.332 09:00:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:47.332 09:00:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:47.332 09:00:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:47.332 09:00:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:47.332 09:00:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:47.332 09:00:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:47.332 09:00:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:47.332 09:00:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:47.332 09:00:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:47.332 09:00:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:47.332 09:00:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:47.332 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:47.332 09:00:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:47.332 09:00:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:47.332 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:47.332 09:00:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:47.332 09:00:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:47.332 09:00:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.332 09:00:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:47.332 09:00:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.332 09:00:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:47.332 Found net devices under 0000:af:00.0: cvl_0_0 00:25:47.332 09:00:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.332 09:00:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:47.332 09:00:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.332 09:00:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:47.332 09:00:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.332 09:00:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:47.332 Found net devices under 0000:af:00.1: cvl_0_1 00:25:47.332 09:00:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.332 09:00:04 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:47.332 09:00:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:47.332 09:00:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:47.332 09:00:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:47.332 09:00:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.332 09:00:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.332 09:00:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:47.332 09:00:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:47.332 09:00:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:47.332 09:00:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:47.332 09:00:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:47.332 09:00:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:47.332 09:00:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.333 09:00:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:47.333 09:00:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:47.333 09:00:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:47.333 09:00:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:47.591 09:00:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:47.591 09:00:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:47.591 09:00:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:47.591 09:00:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:47.591 09:00:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:47.591 09:00:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:47.591 09:00:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:47.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:25:47.850 00:25:47.850 --- 10.0.0.2 ping statistics --- 00:25:47.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.850 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:25:47.850 09:00:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:47.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:47.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:25:47.850 00:25:47.850 --- 10.0.0.1 ping statistics --- 00:25:47.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.850 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:25:47.850 09:00:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.850 09:00:04 -- nvmf/common.sh@411 -- # return 0 00:25:47.850 09:00:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:47.850 09:00:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.850 09:00:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:47.850 09:00:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:47.850 09:00:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.850 09:00:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:47.850 09:00:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:47.850 09:00:04 -- host/fio.sh@14 -- # [[ y != y ]] 00:25:47.850 09:00:04 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:25:47.850 09:00:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:47.850 09:00:04 -- common/autotest_common.sh@10 -- # set +x 00:25:47.850 09:00:04 -- host/fio.sh@22 -- # nvmfpid=2174105 00:25:47.850 09:00:04 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:47.850 09:00:04 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:47.850 09:00:04 -- host/fio.sh@26 -- # waitforlisten 2174105 00:25:47.850 09:00:04 -- common/autotest_common.sh@817 -- # '[' -z 2174105 ']' 00:25:47.850 09:00:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.850 09:00:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:47.850 09:00:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.850 09:00:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:47.850 09:00:04 -- common/autotest_common.sh@10 -- # set +x 00:25:47.850 [2024-04-26 09:00:04.932971] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:25:47.850 [2024-04-26 09:00:04.933020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.850 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.850 [2024-04-26 09:00:05.006915] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:47.850 [2024-04-26 09:00:05.079719] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.850 [2024-04-26 09:00:05.079755] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.850 [2024-04-26 09:00:05.079765] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.850 [2024-04-26 09:00:05.079774] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.850 [2024-04-26 09:00:05.079782] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
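Condensed, the launch sequence above is: run nvmf_tgt inside the target namespace, remember its pid, and block until the default RPC socket appears. A hedged sketch, assuming a built SPDK tree in the current directory and the cvl_0_0_ns_spdk namespace from the trace; the until-loop is a crude stand-in for the suite's waitforlisten:

sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                                   # strictly sudo's pid, good enough for a sketch
# /var/tmp/spdk.sock is SPDK's default RPC endpoint, as the wait message above shows
until sudo test -S /var/tmp/spdk.sock; do sleep 0.2; done
echo "nvmf_tgt is up (launcher pid $nvmfpid)"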
00:25:47.850 [2024-04-26 09:00:05.079822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.850 [2024-04-26 09:00:05.079918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.850 [2024-04-26 09:00:05.080003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:47.850 [2024-04-26 09:00:05.080005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.785 09:00:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:48.785 09:00:05 -- common/autotest_common.sh@850 -- # return 0 00:25:48.785 09:00:05 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:48.785 09:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.785 09:00:05 -- common/autotest_common.sh@10 -- # set +x 00:25:48.785 [2024-04-26 09:00:05.733148] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.785 09:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.785 09:00:05 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:25:48.785 09:00:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:48.785 09:00:05 -- common/autotest_common.sh@10 -- # set +x 00:25:48.785 09:00:05 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:48.785 09:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.785 09:00:05 -- common/autotest_common.sh@10 -- # set +x 00:25:48.785 Malloc1 00:25:48.785 09:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.785 09:00:05 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:48.785 09:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.785 09:00:05 -- common/autotest_common.sh@10 -- # set +x 00:25:48.785 09:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.785 09:00:05 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:48.785 09:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.785 09:00:05 -- common/autotest_common.sh@10 -- # set +x 00:25:48.785 09:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.785 09:00:05 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:48.785 09:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.785 09:00:05 -- common/autotest_common.sh@10 -- # set +x 00:25:48.785 [2024-04-26 09:00:05.831887] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.785 09:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.785 09:00:05 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:48.785 09:00:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.785 09:00:05 -- common/autotest_common.sh@10 -- # set +x 00:25:48.785 09:00:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.785 09:00:05 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:48.785 09:00:05 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:48.785 09:00:05 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:48.785 09:00:05 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:48.785 09:00:05 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:48.785 09:00:05 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:48.785 09:00:05 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:48.785 09:00:05 -- common/autotest_common.sh@1327 -- # shift 00:25:48.785 09:00:05 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:48.785 09:00:05 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.785 09:00:05 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:48.785 09:00:05 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:48.785 09:00:05 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:48.785 09:00:05 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:48.785 09:00:05 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:48.785 09:00:05 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.785 09:00:05 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:48.785 09:00:05 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:48.785 09:00:05 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:48.785 09:00:05 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:48.785 09:00:05 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:48.785 09:00:05 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:48.785 09:00:05 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:49.043 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:49.043 fio-3.35 00:25:49.043 Starting 1 thread 00:25:49.043 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.576 00:25:51.576 test: (groupid=0, jobs=1): err= 0: pid=2174686: Fri Apr 26 09:00:08 2024 00:25:51.576 read: IOPS=11.2k, BW=43.7MiB/s (45.8MB/s)(87.5MiB/2004msec) 00:25:51.576 slat (nsec): min=1486, max=243935, avg=1662.42, stdev=2279.14 00:25:51.576 clat (usec): min=3156, max=18264, avg=6667.42, stdev=1754.48 00:25:51.576 lat (usec): min=3157, max=18274, avg=6669.09, stdev=1754.69 00:25:51.576 clat percentiles (usec): 00:25:51.576 | 1.00th=[ 4359], 5.00th=[ 4948], 10.00th=[ 5211], 20.00th=[ 5604], 00:25:51.576 | 30.00th=[ 5800], 40.00th=[ 5997], 50.00th=[ 6194], 60.00th=[ 6390], 00:25:51.576 | 70.00th=[ 6718], 80.00th=[ 7308], 90.00th=[ 8979], 95.00th=[10552], 00:25:51.576 | 99.00th=[13566], 99.50th=[14615], 99.90th=[17171], 99.95th=[17171], 00:25:51.576 | 99.99th=[17957] 00:25:51.576 bw ( KiB/s): min=42576, max=45808, per=99.84%, avg=44660.00, stdev=1450.77, samples=4 00:25:51.576 iops : min=10644, max=11452, avg=11165.00, stdev=362.69, samples=4 00:25:51.576 write: IOPS=11.1k, BW=43.5MiB/s (45.6MB/s)(87.2MiB/2004msec); 0 zone resets 00:25:51.576 slat (nsec): min=1542, max=247116, avg=1738.58, stdev=1818.55 00:25:51.576 
clat (usec): min=2098, max=16902, avg=4725.04, stdev=1041.44 00:25:51.576 lat (usec): min=2099, max=16917, avg=4726.78, stdev=1041.83 00:25:51.576 clat percentiles (usec): 00:25:51.576 | 1.00th=[ 2802], 5.00th=[ 3294], 10.00th=[ 3621], 20.00th=[ 4047], 00:25:51.576 | 30.00th=[ 4293], 40.00th=[ 4490], 50.00th=[ 4686], 60.00th=[ 4817], 00:25:51.576 | 70.00th=[ 5014], 80.00th=[ 5211], 90.00th=[ 5669], 95.00th=[ 6325], 00:25:51.576 | 99.00th=[ 8160], 99.50th=[ 9765], 99.90th=[16581], 99.95th=[16712], 00:25:51.576 | 99.99th=[16909] 00:25:51.576 bw ( KiB/s): min=42960, max=45472, per=99.99%, avg=44556.00, stdev=1098.80, samples=4 00:25:51.576 iops : min=10740, max=11368, avg=11139.00, stdev=274.70, samples=4 00:25:51.576 lat (msec) : 4=9.63%, 10=87.03%, 20=3.34% 00:25:51.576 cpu : usr=64.00%, sys=29.16%, ctx=41, majf=0, minf=4 00:25:51.576 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:51.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:51.576 issued rwts: total=22411,22324,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.576 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:51.576 00:25:51.576 Run status group 0 (all jobs): 00:25:51.576 READ: bw=43.7MiB/s (45.8MB/s), 43.7MiB/s-43.7MiB/s (45.8MB/s-45.8MB/s), io=87.5MiB (91.8MB), run=2004-2004msec 00:25:51.576 WRITE: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=87.2MiB (91.4MB), run=2004-2004msec 00:25:51.576 09:00:08 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:51.576 09:00:08 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:51.576 09:00:08 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:51.576 09:00:08 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:51.576 09:00:08 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:51.576 09:00:08 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:51.576 09:00:08 -- common/autotest_common.sh@1327 -- # shift 00:25:51.576 09:00:08 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:51.576 09:00:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.576 09:00:08 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:51.576 09:00:08 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:51.576 09:00:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:51.576 09:00:08 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:51.576 09:00:08 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:51.576 09:00:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.576 09:00:08 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:51.576 09:00:08 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:51.576 09:00:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:51.576 09:00:08 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:25:51.576 09:00:08 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:51.576 09:00:08 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:51.576 09:00:08 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:51.842 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:51.842 fio-3.35 00:25:51.842 Starting 1 thread 00:25:51.842 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.422 00:25:54.423 test: (groupid=0, jobs=1): err= 0: pid=2175153: Fri Apr 26 09:00:11 2024 00:25:54.423 read: IOPS=9135, BW=143MiB/s (150MB/s)(286MiB/2005msec) 00:25:54.423 slat (usec): min=2, max=252, avg= 2.76, stdev= 2.96 00:25:54.423 clat (usec): min=2332, max=51843, avg=8674.17, stdev=4948.83 00:25:54.423 lat (usec): min=2334, max=51845, avg=8676.93, stdev=4949.27 00:25:54.423 clat percentiles (usec): 00:25:54.423 | 1.00th=[ 3982], 5.00th=[ 4817], 10.00th=[ 5342], 20.00th=[ 6128], 00:25:54.423 | 30.00th=[ 6718], 40.00th=[ 7242], 50.00th=[ 7701], 60.00th=[ 8291], 00:25:54.423 | 70.00th=[ 8848], 80.00th=[ 9634], 90.00th=[11338], 95.00th=[15926], 00:25:54.423 | 99.00th=[28181], 99.50th=[46400], 99.90th=[50070], 99.95th=[51119], 00:25:54.423 | 99.99th=[51643] 00:25:54.423 bw ( KiB/s): min=57440, max=86464, per=49.50%, avg=72352.00, stdev=11943.19, samples=4 00:25:54.423 iops : min= 3590, max= 5404, avg=4522.00, stdev=746.45, samples=4 00:25:54.423 write: IOPS=5429, BW=84.8MiB/s (89.0MB/s)(148MiB/1743msec); 0 zone resets 00:25:54.423 slat (usec): min=28, max=512, avg=31.33, stdev=17.96 00:25:54.423 clat (usec): min=4054, max=33983, avg=9361.97, stdev=3649.57 00:25:54.423 lat (usec): min=4084, max=34015, avg=9393.30, stdev=3656.68 00:25:54.423 clat percentiles (usec): 00:25:54.423 | 1.00th=[ 5997], 5.00th=[ 6652], 10.00th=[ 6980], 20.00th=[ 7439], 00:25:54.423 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 8979], 00:25:54.423 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[11207], 95.00th=[17171], 00:25:54.423 | 99.00th=[27919], 99.50th=[29492], 99.90th=[31851], 99.95th=[32113], 00:25:54.423 | 99.99th=[33817] 00:25:54.423 bw ( KiB/s): min=59456, max=90080, per=86.67%, avg=75296.00, stdev=12553.33, samples=4 00:25:54.423 iops : min= 3716, max= 5630, avg=4706.00, stdev=784.58, samples=4 00:25:54.423 lat (msec) : 4=0.69%, 10=81.58%, 20=14.17%, 50=3.47%, 100=0.09% 00:25:54.423 cpu : usr=78.94%, sys=15.67%, ctx=21, majf=0, minf=1 00:25:54.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:54.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:54.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:54.423 issued rwts: total=18317,9464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:54.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:54.423 00:25:54.423 Run status group 0 (all jobs): 00:25:54.423 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=286MiB (300MB), run=2005-2005msec 00:25:54.423 WRITE: bw=84.8MiB/s (89.0MB/s), 84.8MiB/s-84.8MiB/s (89.0MB/s-89.0MB/s), io=148MiB (155MB), run=1743-1743msec 00:25:54.423 09:00:11 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:54.423 09:00:11 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:25:54.423 09:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:54.423 09:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:54.423 09:00:11 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:25:54.423 09:00:11 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:25:54.423 09:00:11 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:25:54.423 09:00:11 -- host/fio.sh@84 -- # nvmftestfini 00:25:54.423 09:00:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:54.423 09:00:11 -- nvmf/common.sh@117 -- # sync 00:25:54.423 09:00:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:54.423 09:00:11 -- nvmf/common.sh@120 -- # set +e 00:25:54.423 09:00:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:54.423 09:00:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:54.423 rmmod nvme_tcp 00:25:54.423 rmmod nvme_fabrics 00:25:54.423 rmmod nvme_keyring 00:25:54.423 09:00:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:54.423 09:00:11 -- nvmf/common.sh@124 -- # set -e 00:25:54.423 09:00:11 -- nvmf/common.sh@125 -- # return 0 00:25:54.423 09:00:11 -- nvmf/common.sh@478 -- # '[' -n 2174105 ']' 00:25:54.423 09:00:11 -- nvmf/common.sh@479 -- # killprocess 2174105 00:25:54.423 09:00:11 -- common/autotest_common.sh@936 -- # '[' -z 2174105 ']' 00:25:54.423 09:00:11 -- common/autotest_common.sh@940 -- # kill -0 2174105 00:25:54.423 09:00:11 -- common/autotest_common.sh@941 -- # uname 00:25:54.423 09:00:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:54.423 09:00:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2174105 00:25:54.423 09:00:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:54.423 09:00:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:54.423 09:00:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2174105' 00:25:54.423 killing process with pid 2174105 00:25:54.423 09:00:11 -- common/autotest_common.sh@955 -- # kill 2174105 00:25:54.423 09:00:11 -- common/autotest_common.sh@960 -- # wait 2174105 00:25:54.423 09:00:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:54.423 09:00:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:54.423 09:00:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:54.423 09:00:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:54.423 09:00:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:54.423 09:00:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.423 09:00:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:54.423 09:00:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.021 09:00:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:57.021 00:25:57.021 real 0m15.941s 00:25:57.021 user 0m46.571s 00:25:57.021 sys 0m7.607s 00:25:57.021 09:00:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:57.021 09:00:13 -- common/autotest_common.sh@10 -- # set +x 00:25:57.021 ************************************ 00:25:57.021 END TEST nvmf_fio_host 00:25:57.021 ************************************ 00:25:57.021 09:00:13 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:57.021 09:00:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:57.021 09:00:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:57.021 09:00:13 -- common/autotest_common.sh@10 -- # 
set +x 00:25:57.021 ************************************ 00:25:57.021 START TEST nvmf_failover 00:25:57.021 ************************************ 00:25:57.021 09:00:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:57.021 * Looking for test storage... 00:25:57.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:57.021 09:00:14 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.021 09:00:14 -- nvmf/common.sh@7 -- # uname -s 00:25:57.021 09:00:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.021 09:00:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.021 09:00:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.021 09:00:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.021 09:00:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.021 09:00:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.021 09:00:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.021 09:00:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.021 09:00:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.021 09:00:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.021 09:00:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:57.021 09:00:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:57.021 09:00:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.021 09:00:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.021 09:00:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.021 09:00:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.021 09:00:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.021 09:00:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.021 09:00:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.021 09:00:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.021 09:00:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.021 09:00:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.021 09:00:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.021 09:00:14 -- paths/export.sh@5 -- # export PATH 00:25:57.021 09:00:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.021 09:00:14 -- nvmf/common.sh@47 -- # : 0 00:25:57.021 09:00:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:57.021 09:00:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:57.021 09:00:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.021 09:00:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.021 09:00:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.021 09:00:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:57.021 09:00:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:57.021 09:00:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:57.021 09:00:14 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:57.021 09:00:14 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:57.021 09:00:14 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:57.021 09:00:14 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:57.021 09:00:14 -- host/failover.sh@18 -- # nvmftestinit 00:25:57.021 09:00:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:57.021 09:00:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.021 09:00:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:57.021 09:00:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:57.021 09:00:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:57.021 09:00:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.021 09:00:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:57.021 09:00:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.021 09:00:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:57.021 09:00:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:57.021 09:00:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:57.021 09:00:14 -- common/autotest_common.sh@10 -- # set +x 00:26:03.583 09:00:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:03.583 09:00:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:03.583 09:00:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:03.583 09:00:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:03.583 09:00:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:03.583 09:00:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:03.583 09:00:20 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:03.583 09:00:20 -- nvmf/common.sh@295 -- # net_devs=() 00:26:03.583 09:00:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:03.583 09:00:20 -- nvmf/common.sh@296 -- # e810=() 00:26:03.583 09:00:20 -- nvmf/common.sh@296 -- # local -ga e810 00:26:03.583 09:00:20 -- nvmf/common.sh@297 -- # x722=() 00:26:03.583 09:00:20 -- nvmf/common.sh@297 -- # local -ga x722 00:26:03.583 09:00:20 -- nvmf/common.sh@298 -- # mlx=() 00:26:03.583 09:00:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:03.583 09:00:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.583 09:00:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.583 09:00:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.583 09:00:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.583 09:00:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.583 09:00:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.583 09:00:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.583 09:00:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.583 09:00:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.583 09:00:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.583 09:00:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.583 09:00:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:03.583 09:00:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:03.583 09:00:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:03.583 09:00:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.583 09:00:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:03.583 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:03.583 09:00:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:03.583 09:00:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:03.583 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:03.583 09:00:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:03.583 09:00:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.583 09:00:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.583 09:00:20 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:26:03.583 09:00:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.583 09:00:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:03.583 Found net devices under 0000:af:00.0: cvl_0_0 00:26:03.583 09:00:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.583 09:00:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:03.583 09:00:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.583 09:00:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:03.583 09:00:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.583 09:00:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:03.583 Found net devices under 0000:af:00.1: cvl_0_1 00:26:03.583 09:00:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.583 09:00:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:03.583 09:00:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:03.583 09:00:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:03.583 09:00:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.583 09:00:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.583 09:00:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:03.583 09:00:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:03.583 09:00:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:03.583 09:00:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:03.583 09:00:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:03.583 09:00:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:03.583 09:00:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:03.583 09:00:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:03.583 09:00:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:03.583 09:00:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:03.583 09:00:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:03.583 09:00:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:03.583 09:00:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:03.583 09:00:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:03.583 09:00:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:03.583 09:00:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:03.583 09:00:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:03.583 09:00:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:03.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:03.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:26:03.583 00:26:03.583 --- 10.0.0.2 ping statistics --- 00:26:03.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.583 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:26:03.583 09:00:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:03.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:03.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:26:03.583 00:26:03.583 --- 10.0.0.1 ping statistics --- 00:26:03.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.583 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:26:03.583 09:00:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.583 09:00:20 -- nvmf/common.sh@411 -- # return 0 00:26:03.583 09:00:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:03.583 09:00:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.583 09:00:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:03.583 09:00:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.583 09:00:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:03.583 09:00:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:03.583 09:00:20 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:03.583 09:00:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:03.583 09:00:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:03.583 09:00:20 -- common/autotest_common.sh@10 -- # set +x 00:26:03.583 09:00:20 -- nvmf/common.sh@470 -- # nvmfpid=2179301 00:26:03.583 09:00:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:03.583 09:00:20 -- nvmf/common.sh@471 -- # waitforlisten 2179301 00:26:03.583 09:00:20 -- common/autotest_common.sh@817 -- # '[' -z 2179301 ']' 00:26:03.583 09:00:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.583 09:00:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:03.583 09:00:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.583 09:00:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:03.583 09:00:20 -- common/autotest_common.sh@10 -- # set +x 00:26:03.583 [2024-04-26 09:00:20.705761] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:26:03.583 [2024-04-26 09:00:20.705810] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.583 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.583 [2024-04-26 09:00:20.779757] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:03.842 [2024-04-26 09:00:20.853306] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.842 [2024-04-26 09:00:20.853341] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.842 [2024-04-26 09:00:20.853351] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.842 [2024-04-26 09:00:20.853359] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.842 [2024-04-26 09:00:20.853366] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
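What the trace performs next is the target-side provisioning for the failover run: create the TCP transport, back a subsystem with a 64 MiB malloc bdev, and expose it on three portals so one listener can be pulled mid-I/O. The same steps condensed into a runnable sketch; rpc.py and every RPC name below appear verbatim in the log, and only the loop and the $rpc shorthand are editorial:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                 # -u sets an 8 KiB I/O unit size
$rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB backing bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                               # three portals on one subsystem
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done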
00:26:03.842 [2024-04-26 09:00:20.853470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:03.842 [2024-04-26 09:00:20.853492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:03.842 [2024-04-26 09:00:20.853495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.409 09:00:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:04.409 09:00:21 -- common/autotest_common.sh@850 -- # return 0 00:26:04.409 09:00:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:04.409 09:00:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:04.409 09:00:21 -- common/autotest_common.sh@10 -- # set +x 00:26:04.409 09:00:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.409 09:00:21 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:04.667 [2024-04-26 09:00:21.704993] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.667 09:00:21 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:04.667 Malloc0 00:26:04.925 09:00:21 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:04.925 09:00:22 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:05.183 09:00:22 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.440 [2024-04-26 09:00:22.465631] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.440 09:00:22 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:05.440 [2024-04-26 09:00:22.646120] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:05.440 09:00:22 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:05.698 [2024-04-26 09:00:22.818692] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:05.699 09:00:22 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:05.699 09:00:22 -- host/failover.sh@31 -- # bdevperf_pid=2179665 00:26:05.699 09:00:22 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:05.699 09:00:22 -- host/failover.sh@34 -- # waitforlisten 2179665 /var/tmp/bdevperf.sock 00:26:05.699 09:00:22 -- common/autotest_common.sh@817 -- # '[' -z 2179665 ']' 00:26:05.699 09:00:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:05.699 09:00:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:05.699 09:00:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
00:26:05.699 09:00:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:05.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:26:05.699 09:00:22 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:05.699 09:00:22 -- common/autotest_common.sh@10 -- # set +x
00:26:06.636 09:00:23 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:06.636 09:00:23 -- common/autotest_common.sh@850 -- # return 0
00:26:06.636 09:00:23 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:06.893 NVMe0n1
00:26:06.893 09:00:24 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:07.460 00
00:26:07.460 09:00:24 -- host/failover.sh@39 -- # run_test_pid=2179937
00:26:07.460 09:00:24 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:07.460 09:00:24 -- host/failover.sh@41 -- # sleep 1
00:26:08.395 09:00:25 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:08.655 [2024-04-26 09:00:25.656284] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a04b0 is same with the state(5) to be set
[tcp.c:1587 message repeated verbatim through 09:00:25.657329]
00:26:08.657 09:00:25 -- host/failover.sh@45 -- # sleep 3
00:26:11.943 09:00:28 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:11.943 00
00:26:11.943 09:00:28 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:11.943 [2024-04-26 09:00:29.112532] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a1370 is same with the state(5) to be set
[tcp.c:1587 message repeated verbatim through 09:00:29.112688]
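The two listener removals above are the actual failover triggers: each one tears down the TCP qpair that bdevperf is actively using (the repeated tcp.c:1587 records are evidently the target-side qpair teardown), and the host's bdev_nvme layer is expected to reconnect on the next path attached under the NVMe0 name. Stripped of harness plumbing, the trigger sequence condenses to roughly this sketch (commands and timings as logged):

    # Sketch: force two failovers against the running workload.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3    # give the initiator time to fail over to the 4421 path
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3    # ... and again, over to the 4422 path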
00:26:11.943 09:00:29 -- host/failover.sh@50 -- # sleep 3
00:26:15.242 09:00:32 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:15.242 [2024-04-26 09:00:32.302759] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:15.242 09:00:32 -- host/failover.sh@55 -- # sleep 1
00:26:16.177 09:00:33 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:16.447 [2024-04-26 09:00:33.502504] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x265b040 is same with the state(5) to be set
[tcp.c:1587 message repeated verbatim through 09:00:33.502871]
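Note the ordering in this last leg: port 4420 is re-added before 4422 is removed, so the subsystem never loses its last listener while I/O is in flight. When reproducing this by hand, the set of currently exported listeners can be checked between steps (a sketch; nvmf_get_subsystems is the long-standing query RPC and returns the subsystem's listen_addresses as JSON):

    # Sketch: inspect which ports the subsystem is exporting right now.
    rpc.py nvmf_get_subsystems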
00:26:16.448 09:00:33 -- host/failover.sh@59 -- # wait 2179937
00:26:23.029 0
00:26:23.029 09:00:39 -- host/failover.sh@61 -- # killprocess 2179665
00:26:23.029 09:00:39 -- common/autotest_common.sh@936 -- # '[' -z 2179665 ']'
00:26:23.029 09:00:39 -- common/autotest_common.sh@940 -- # kill -0 2179665
00:26:23.029 09:00:39 -- common/autotest_common.sh@941 -- # uname
00:26:23.029 09:00:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:23.029 09:00:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2179665
00:26:23.029 09:00:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:26:23.029 09:00:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:26:23.029 09:00:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2179665'
killing process with pid 2179665
00:26:23.029 09:00:39 -- common/autotest_common.sh@955 -- # kill 2179665
00:26:23.029 09:00:39 -- common/autotest_common.sh@960 -- # wait 2179665
00:26:23.029 09:00:39 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
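The try.txt dumped next is bdevperf's own log of the 15-second verify run. Outside the harness, the same workload boils down to the following sketch, with flags and paths taken from the invocations logged above ($SPDK standing in for the checkout root, and the harness's waitforlisten replaced by a crude sleep):

    # Sketch: start bdevperf in RPC-wait mode, attach the first path, then kick the run.
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    sleep 2    # stand-in for waitforlisten on /var/tmp/bdevperf.sock
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests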
00:26:23.029 [2024-04-26 09:00:22.880288] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:26:23.029 [2024-04-26 09:00:22.880344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179665 ]
00:26:23.029 EAL: No free 2048 kB hugepages reported on node 1
00:26:23.029 [2024-04-26 09:00:22.950539] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:23.029 [2024-04-26 09:00:23.020162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:23.029 Running I/O for 15 seconds...
00:26:23.029 [2024-04-26 09:00:25.657630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.029 [2024-04-26 09:00:25.657667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[matching print_command/print_completion pairs repeat for the remaining in-flight I/O: READ lba:100096 through lba:100832 and WRITE lba:100864 through lba:100976 (len:8), every completion ABORTED - SQ DELETION (00/08)]
READ sqid:1 cid:28 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-04-26 09:00:25.659874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.659885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-04-26 09:00:25.659893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.659904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.032 [2024-04-26 09:00:25.659914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.659926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.032 [2024-04-26 09:00:25.659935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.659946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.032 [2024-04-26 09:00:25.659955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.659965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.032 [2024-04-26 09:00:25.659974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.659984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.032 [2024-04-26 09:00:25.659993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.660004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.032 [2024-04-26 09:00:25.660013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.660023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.032 [2024-04-26 09:00:25.660033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.660043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.032 [2024-04-26 09:00:25.660052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.660063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101040 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.032 [2024-04-26 09:00:25.660071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.660082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.032 [2024-04-26 09:00:25.660091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.660102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.032 [2024-04-26 09:00:25.660111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.660121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.032 [2024-04-26 09:00:25.660130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.660140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.032 [2024-04-26 09:00:25.660150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.660163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.032 [2024-04-26 09:00:25.660172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.660183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.032 [2024-04-26 09:00:25.660192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.660202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.032 [2024-04-26 09:00:25.660211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.660222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x151d130 is same with the state(5) to be set 00:26:23.032 [2024-04-26 09:00:25.660233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:23.032 [2024-04-26 09:00:25.660242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:23.032 [2024-04-26 09:00:25.660250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101104 len:8 PRP1 0x0 PRP2 0x0 00:26:23.032 [2024-04-26 09:00:25.660260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.032 [2024-04-26 09:00:25.660306] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x151d130 was 
disconnected and freed. reset controller.
00:26:23.033 [2024-04-26 09:00:25.660318] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:23.033 [2024-04-26 09:00:25.660342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:23.033 [2024-04-26 09:00:25.660351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.033 [2024-04-26 09:00:25.660361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:23.033 [2024-04-26 09:00:25.660371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.033 [2024-04-26 09:00:25.660381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:23.033 [2024-04-26 09:00:25.660390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.033 [2024-04-26 09:00:25.660399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:23.033 [2024-04-26 09:00:25.660409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.033 [2024-04-26 09:00:25.660419] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:23.033 [2024-04-26 09:00:25.663109] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:23.033 [2024-04-26 09:00:25.663140] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fe700 (9): Bad file descriptor
00:26:23.033 [2024-04-26 09:00:25.783003] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
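The failover above moves the controller from the first registered TCP path (10.0.0.2:4420) to the next one (10.0.0.2:4421) after the active qpair is torn down. A minimal sketch of how such alternate paths are typically registered with the stock scripts/rpc.py tooling is below; the trids match the ones in the log, but the bdev name Nvme0 is a hypothetical placeholder and the commands are illustrative, not taken from this job's test script:

# Hedged sketch, assuming stock scripts/rpc.py: attach the same controller
# name twice with the same subnqn but different trsvcid, so bdev_nvme holds
# a second trid to fail over to. "Nvme0" is a placeholder name.
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1

On recent SPDK releases an explicit multipath policy can also be selected with -x (e.g. -x failover); the bdev_nvme_failover_trid notice in the log corresponds to bdev_nvme walking this registered trid list.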
00:26:23.033 [2024-04-26 09:00:29.113058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-04-26 09:00:29.113148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-04-26 09:00:29.113169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-04-26 09:00:29.113188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-04-26 09:00:29.113208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-04-26 09:00:29.113228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-04-26 09:00:29.113248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-04-26 09:00:29.113267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.033 [2024-04-26 09:00:29.113287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113298] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113505] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55224 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.033 [2024-04-26 09:00:29.113722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.033 [2024-04-26 09:00:29.113731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.113741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.113751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.113762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.113771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.113782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.113791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.113801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.113811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.113821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.113830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.113841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.113850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.113860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.113870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.113882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.113891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.113901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:23.034 [2024-04-26 09:00:29.113910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.113921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.113930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.113940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.113949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.113959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.113968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.113978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.113987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.113998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.114007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.114026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.034 [2024-04-26 09:00:29.114045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.034 [2024-04-26 09:00:29.114065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.034 [2024-04-26 09:00:29.114085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:54680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.034 [2024-04-26 09:00:29.114105] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.034 [2024-04-26 09:00:29.114126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.034 [2024-04-26 09:00:29.114146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.034 [2024-04-26 09:00:29.114166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.034 [2024-04-26 09:00:29.114187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.034 [2024-04-26 09:00:29.114210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.034 [2024-04-26 09:00:29.114231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.114250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.114270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.114293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.114314] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.114336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.114357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.114378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.114401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.034 [2024-04-26 09:00:29.114412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.034 [2024-04-26 09:00:29.114421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.035 [2024-04-26 09:00:29.114586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.035 [2024-04-26 09:00:29.114605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.035 [2024-04-26 09:00:29.114624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.035 [2024-04-26 09:00:29.114645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.035 [2024-04-26 09:00:29.114665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.035 [2024-04-26 09:00:29.114684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.035 [2024-04-26 09:00:29.114704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 
[2024-04-26 09:00:29.114734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.114981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.114991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.035 [2024-04-26 09:00:29.115000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.115011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.035 [2024-04-26 09:00:29.115020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.115030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.035 [2024-04-26 09:00:29.115039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.115050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.035 [2024-04-26 09:00:29.115059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.115070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.035 [2024-04-26 09:00:29.115078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.115091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.035 [2024-04-26 09:00:29.115100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.115111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:23.035 [2024-04-26 09:00:29.115120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:23.035 [2024-04-26 09:00:29.115130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:94 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.035 [2024-04-26 09:00:29.115139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / ABORTED - SQ DELETION (00/08) pair repeats for every command still queued on qid:1: READs at lba 54848 through 55032 (ascending in steps of 8) plus one WRITE at lba 55608 ...]
00:26:23.036 [2024-04-26 09:00:29.115659] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:23.036 [2024-04-26 09:00:29.115668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:23.036 [2024-04-26 09:00:29.115676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55040 len:8 PRP1 0x0 PRP2 0x0
00:26:23.036 [2024-04-26 09:00:29.115686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.036 [2024-04-26 09:00:29.115732] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x150abf0 was disconnected and freed. reset controller.
00:26:23.036 [2024-04-26 09:00:29.115743] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:26:23.036 [2024-04-26 09:00:29.115765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:23.036 [2024-04-26 09:00:29.115774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.036 [2024-04-26 09:00:29.115786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:23.036 [2024-04-26 09:00:29.115795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.036 [2024-04-26 09:00:29.115805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:23.036 [2024-04-26 09:00:29.115814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.036 [2024-04-26 09:00:29.115823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:23.036 [2024-04-26 09:00:29.115832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.036 [2024-04-26 09:00:29.115840] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:23.036 [2024-04-26 09:00:29.118503] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:23.036 [2024-04-26 09:00:29.118533] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fe700 (9): Bad file descriptor
00:26:23.036 [2024-04-26 09:00:29.280002] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
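That is one complete failover cycle: when the TCP qpair to 10.0.0.2:4421 drops, bdev_nvme aborts every command still queued on it (the ABORTED - SQ DELETION burst above), moves the controller to the next registered trid, and logs "Resetting controller successful" once the new path is up. That string is the marker host/failover.sh counts at the end of the run; a minimal sketch of that check, assuming the run's console output was captured to try.txt as the script does:

  # one 'Resetting controller successful' per path transition is expected
  count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
  (( count == 3 )) || exit 1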
00:26:23.036 [2024-04-26 09:00:33.503076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:23.036 [2024-04-26 09:00:33.503111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / ABORTED - SQ DELETION (00/08) pair repeats for every command still queued on qid:1 when the second path drops: READs at lba 114728 through 115344 and WRITEs at lba 115360 through 115736 ...]
00:26:23.040 [2024-04-26 09:00:33.505713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x150abf0 is same with the state(5) to be set
00:26:23.040 [2024-04-26 09:00:33.505724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:23.040 [2024-04-26 09:00:33.505732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:23.040 [2024-04-26 09:00:33.505740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115352 len:8 PRP1 0x0 PRP2 0x0
00:26:23.040 [2024-04-26 09:00:33.505749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.040 [2024-04-26 09:00:33.505794] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x150abf0 was disconnected and freed. reset controller.
00:26:23.040 [2024-04-26 09:00:33.505805] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:26:23.040 [2024-04-26 09:00:33.505827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:23.040 [2024-04-26 09:00:33.505840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.040 [2024-04-26 09:00:33.505850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:23.040 [2024-04-26 09:00:33.505859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.040 [2024-04-26 09:00:33.505869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:23.040 [2024-04-26 09:00:33.505879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.040 [2024-04-26 09:00:33.505888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:23.040 [2024-04-26 09:00:33.505898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:23.040 [2024-04-26 09:00:33.505907] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:23.040 [2024-04-26 09:00:33.508589] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:23.040 [2024-04-26 09:00:33.508619] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fe700 (9): Bad file descriptor
00:26:23.040 [2024-04-26 09:00:33.549595] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
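With this reset the controller has cycled through all three listeners (4420 -> 4421 -> 4422 -> 4420) without losing the bdev. The liveness check the script runs between transitions (visible later in the trace at failover.sh@82 and @88) is a one-liner against the bdevperf RPC socket; a sketch, with rpc.py standing in for the full script path shown in the log:

  # non-zero exit status if the NVMe0 controller disappeared during failover
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0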
00:26:23.040
00:26:23.040 Latency(us)
00:26:23.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:23.040 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:23.040 Verification LBA range: start 0x0 length 0x4000
00:26:23.040 NVMe0n1 : 15.01 11117.34 43.43 1063.85 0.00 10487.39 799.54 25375.54
00:26:23.040 ===================================================================================================================
00:26:23.040 Total : 11117.34 43.43 1063.85 0.00 10487.39 799.54 25375.54
00:26:23.040 Received shutdown signal, test time was about 15.000000 seconds
00:26:23.040
00:26:23.040 Latency(us)
00:26:23.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:23.040 ===================================================================================================================
00:26:23.040 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:23.040 09:00:39 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:26:23.040 09:00:39 -- host/failover.sh@65 -- # count=3
00:26:23.040 09:00:39 -- host/failover.sh@67 -- # (( count != 3 ))
00:26:23.040 09:00:39 -- host/failover.sh@73 -- # bdevperf_pid=2182582
00:26:23.040 09:00:39 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:26:23.040 09:00:39 -- host/failover.sh@75 -- # waitforlisten 2182582 /var/tmp/bdevperf.sock
00:26:23.040 09:00:39 -- common/autotest_common.sh@817 -- # '[' -z 2182582 ']'
00:26:23.040 09:00:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:23.040 09:00:39 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:23.040 09:00:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:23.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
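The 15-second verify run is done and the count==3 gate passed, so the script relaunches bdevperf idle for the directed-failover pass: -z starts it with no configuration, -r puts its JSON-RPC server on a private UNIX socket, and waitforlisten polls that socket before any RPC is sent. A condensed sketch of the launch plus the attach/detach drill the following trace performs; paths are relative to the spdk checkout, and the polling loop is only a stand-in for waitforlisten (which retries with a bound of 100):

  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # wait until the RPC socket answers before configuring anything
  until scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  # target side: listen on two extra ports so the host has somewhere to fail over to
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # host side: register all three paths under one controller name (the first attach creates NVMe0n1)
  for port in 4420 4421 4422; do
      scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # drop the active path; bdev_nvme fails the controller over to the next registered trid
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1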
00:26:23.040 09:00:39 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:23.040 09:00:39 -- common/autotest_common.sh@10 -- # set +x
00:26:23.604 09:00:40 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:23.604 09:00:40 -- common/autotest_common.sh@850 -- # return 0
00:26:23.604 09:00:40 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:23.862 [2024-04-26 09:00:40.930464] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:23.862 09:00:40 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:24.120 [2024-04-26 09:00:41.119035] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:26:24.120 09:00:41 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:24.379 NVMe0n1
00:26:24.379 09:00:41 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:24.637
00:26:24.637 09:00:41 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:24.895
00:26:24.895 09:00:42 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:24.895 09:00:42 -- host/failover.sh@82 -- # grep -q NVMe0
00:26:25.153 09:00:42 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:25.411 09:00:42 -- host/failover.sh@87 -- # sleep 3
00:26:28.751 09:00:45 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:28.751 09:00:45 -- host/failover.sh@88 -- # grep -q NVMe0
00:26:28.752 09:00:45 -- host/failover.sh@90 -- # run_test_pid=2183421
00:26:28.752 09:00:45 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:28.752 09:00:45 -- host/failover.sh@92 -- # wait 2183421
00:26:29.683 0
00:26:29.683 09:00:46 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:29.683 [2024-04-26 09:00:39.960046] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:26:29.683 [2024-04-26 09:00:39.960104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182582 ] 00:26:29.683 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.683 [2024-04-26 09:00:40.032183] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.683 [2024-04-26 09:00:40.111642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.683 [2024-04-26 09:00:42.424844] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:29.683 [2024-04-26 09:00:42.424893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.683 [2024-04-26 09:00:42.424906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.683 [2024-04-26 09:00:42.424917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.683 [2024-04-26 09:00:42.424927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.683 [2024-04-26 09:00:42.424937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.683 [2024-04-26 09:00:42.424946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.683 [2024-04-26 09:00:42.424956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.683 [2024-04-26 09:00:42.424965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.683 [2024-04-26 09:00:42.424974] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:29.683 [2024-04-26 09:00:42.425002] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:29.683 [2024-04-26 09:00:42.425019] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1731700 (9): Bad file descriptor 00:26:29.683 [2024-04-26 09:00:42.474794] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:29.683 Running I/O for 1 seconds... 
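(The try.txt excerpt above has the expected shape for a clean failover: the pending ASYNC EVENT REQUESTs on the dying 4420 qpair are aborted with SQ DELETION, the controller briefly sits in the failed state, and the reset completes against 4421. Grading is just a count of successful resets, as in the count=3 check near the top of this excerpt; schematically, with $testdir standing for the host test directory logged above, and the per-run latency table following below:)

    # Each clean failover logs exactly one of these lines.
    count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
    (( count == 3 )) || exit 1   # 3 resets expected for the 15s run above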
00:26:29.683 00:26:29.683 Latency(us) 00:26:29.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.683 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:29.683 Verification LBA range: start 0x0 length 0x4000 00:26:29.683 NVMe0n1 : 1.00 11051.11 43.17 0.00 0.00 11541.48 2385.51 17301.50 00:26:29.683 =================================================================================================================== 00:26:29.683 Total : 11051.11 43.17 0.00 0.00 11541.48 2385.51 17301.50 00:26:29.683 09:00:46 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:29.683 09:00:46 -- host/failover.sh@95 -- # grep -q NVMe0 00:26:29.940 09:00:46 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:29.940 09:00:47 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:29.940 09:00:47 -- host/failover.sh@99 -- # grep -q NVMe0 00:26:30.197 09:00:47 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:30.454 09:00:47 -- host/failover.sh@101 -- # sleep 3 00:26:33.734 09:00:50 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:33.734 09:00:50 -- host/failover.sh@103 -- # grep -q NVMe0 00:26:33.734 09:00:50 -- host/failover.sh@108 -- # killprocess 2182582 00:26:33.734 09:00:50 -- common/autotest_common.sh@936 -- # '[' -z 2182582 ']' 00:26:33.734 09:00:50 -- common/autotest_common.sh@940 -- # kill -0 2182582 00:26:33.734 09:00:50 -- common/autotest_common.sh@941 -- # uname 00:26:33.734 09:00:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:33.734 09:00:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2182582 00:26:33.734 09:00:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:33.734 09:00:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:33.734 09:00:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2182582' 00:26:33.734 killing process with pid 2182582 00:26:33.734 09:00:50 -- common/autotest_common.sh@955 -- # kill 2182582 00:26:33.734 09:00:50 -- common/autotest_common.sh@960 -- # wait 2182582 00:26:33.734 09:00:50 -- host/failover.sh@110 -- # sync 00:26:33.734 09:00:50 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:33.993 09:00:51 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:33.993 09:00:51 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:33.993 09:00:51 -- host/failover.sh@116 -- # nvmftestfini 00:26:33.993 09:00:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:33.993 09:00:51 -- nvmf/common.sh@117 -- # sync 00:26:33.993 09:00:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:33.993 09:00:51 -- nvmf/common.sh@120 -- # set +e 00:26:33.993 09:00:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:33.993 09:00:51 -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-tcp 00:26:33.993 rmmod nvme_tcp 00:26:33.993 rmmod nvme_fabrics 00:26:33.993 rmmod nvme_keyring 00:26:33.993 09:00:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:33.993 09:00:51 -- nvmf/common.sh@124 -- # set -e 00:26:33.993 09:00:51 -- nvmf/common.sh@125 -- # return 0 00:26:33.993 09:00:51 -- nvmf/common.sh@478 -- # '[' -n 2179301 ']' 00:26:33.993 09:00:51 -- nvmf/common.sh@479 -- # killprocess 2179301 00:26:33.993 09:00:51 -- common/autotest_common.sh@936 -- # '[' -z 2179301 ']' 00:26:33.993 09:00:51 -- common/autotest_common.sh@940 -- # kill -0 2179301 00:26:33.993 09:00:51 -- common/autotest_common.sh@941 -- # uname 00:26:33.993 09:00:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:33.993 09:00:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2179301 00:26:34.251 09:00:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:34.251 09:00:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:34.251 09:00:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2179301' 00:26:34.251 killing process with pid 2179301 00:26:34.251 09:00:51 -- common/autotest_common.sh@955 -- # kill 2179301 00:26:34.251 09:00:51 -- common/autotest_common.sh@960 -- # wait 2179301 00:26:34.251 09:00:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:34.251 09:00:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:34.251 09:00:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:34.251 09:00:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:34.251 09:00:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:34.251 09:00:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.251 09:00:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.251 09:00:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.784 09:00:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:36.784 00:26:36.784 real 0m39.659s 00:26:36.784 user 2m3.016s 00:26:36.784 sys 0m9.831s 00:26:36.784 09:00:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:36.784 09:00:53 -- common/autotest_common.sh@10 -- # set +x 00:26:36.784 ************************************ 00:26:36.784 END TEST nvmf_failover 00:26:36.784 ************************************ 00:26:36.784 09:00:53 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:36.784 09:00:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:36.784 09:00:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:36.784 09:00:53 -- common/autotest_common.sh@10 -- # set +x 00:26:36.784 ************************************ 00:26:36.784 START TEST nvmf_discovery 00:26:36.784 ************************************ 00:26:36.784 09:00:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:36.784 * Looking for test storage... 
00:26:36.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:36.784 09:00:53 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:36.784 09:00:53 -- nvmf/common.sh@7 -- # uname -s 00:26:36.784 09:00:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:36.784 09:00:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:36.784 09:00:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:36.784 09:00:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:36.784 09:00:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:36.784 09:00:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:36.784 09:00:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:36.784 09:00:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:36.784 09:00:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:36.785 09:00:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:36.785 09:00:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:36.785 09:00:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:36.785 09:00:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:36.785 09:00:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:36.785 09:00:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:36.785 09:00:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:36.785 09:00:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:36.785 09:00:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.785 09:00:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.785 09:00:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.785 09:00:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.785 09:00:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.785 09:00:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.785 09:00:53 -- paths/export.sh@5 -- # export PATH 00:26:36.785 09:00:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.785 09:00:53 -- nvmf/common.sh@47 -- # : 0 00:26:36.785 09:00:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:36.785 09:00:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:36.785 09:00:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:36.785 09:00:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:36.785 09:00:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:36.785 09:00:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:36.785 09:00:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:36.785 09:00:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:36.785 09:00:53 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:36.785 09:00:53 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:36.785 09:00:53 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:36.785 09:00:53 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:36.785 09:00:53 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:36.785 09:00:53 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:36.785 09:00:53 -- host/discovery.sh@25 -- # nvmftestinit 00:26:36.785 09:00:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:36.785 09:00:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.785 09:00:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:36.785 09:00:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:36.785 09:00:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:36.785 09:00:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.785 09:00:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:36.785 09:00:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.785 09:00:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:36.785 09:00:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:36.785 09:00:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:36.785 09:00:53 -- common/autotest_common.sh@10 -- # set +x 00:26:43.481 09:01:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:43.481 09:01:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:43.481 09:01:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:43.481 09:01:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:43.481 09:01:00 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:43.481 09:01:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:43.481 09:01:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:43.481 09:01:00 -- nvmf/common.sh@295 -- # net_devs=() 00:26:43.481 09:01:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:43.481 09:01:00 -- nvmf/common.sh@296 -- # e810=() 00:26:43.481 09:01:00 -- nvmf/common.sh@296 -- # local -ga e810 00:26:43.481 09:01:00 -- nvmf/common.sh@297 -- # x722=() 00:26:43.481 09:01:00 -- nvmf/common.sh@297 -- # local -ga x722 00:26:43.481 09:01:00 -- nvmf/common.sh@298 -- # mlx=() 00:26:43.481 09:01:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:43.481 09:01:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.481 09:01:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.481 09:01:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.481 09:01:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.481 09:01:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.481 09:01:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.481 09:01:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.481 09:01:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.481 09:01:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.481 09:01:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.481 09:01:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.481 09:01:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:43.481 09:01:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:43.481 09:01:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:43.481 09:01:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:43.481 09:01:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:43.481 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:43.481 09:01:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:43.481 09:01:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:43.481 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:43.481 09:01:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:43.481 09:01:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:43.481 
09:01:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.481 09:01:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:43.481 09:01:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.481 09:01:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:43.481 Found net devices under 0000:af:00.0: cvl_0_0 00:26:43.481 09:01:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.481 09:01:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:43.481 09:01:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.481 09:01:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:43.481 09:01:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.481 09:01:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:43.481 Found net devices under 0000:af:00.1: cvl_0_1 00:26:43.481 09:01:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.481 09:01:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:43.481 09:01:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:43.481 09:01:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:43.481 09:01:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:43.481 09:01:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:43.481 09:01:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:43.481 09:01:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:43.481 09:01:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:43.481 09:01:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:43.481 09:01:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:43.481 09:01:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:43.481 09:01:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:43.481 09:01:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:43.481 09:01:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:43.481 09:01:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:43.481 09:01:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:43.481 09:01:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:43.481 09:01:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:43.481 09:01:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:43.481 09:01:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:43.481 09:01:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:43.481 09:01:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:43.481 09:01:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:43.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:43.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:26:43.481 00:26:43.481 --- 10.0.0.2 ping statistics --- 00:26:43.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.481 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:26:43.481 09:01:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:43.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
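(Unrolled, the interface plumbing above gives the target its own network namespace on one port of the E810 pair and leaves the peer port in the root namespace for the initiator; a sketch with the interface names as logged:)

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator keeps 10.0.0.1 outside
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # root ns -> target ns sanity check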
00:26:43.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:26:43.481 00:26:43.481 --- 10.0.0.1 ping statistics --- 00:26:43.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.481 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:26:43.481 09:01:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:43.481 09:01:00 -- nvmf/common.sh@411 -- # return 0 00:26:43.481 09:01:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:43.481 09:01:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:43.481 09:01:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:43.481 09:01:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:43.481 09:01:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:43.481 09:01:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:43.481 09:01:00 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:43.481 09:01:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:43.481 09:01:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:43.481 09:01:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.744 09:01:00 -- nvmf/common.sh@470 -- # nvmfpid=2188164 00:26:43.745 09:01:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:43.745 09:01:00 -- nvmf/common.sh@471 -- # waitforlisten 2188164 00:26:43.745 09:01:00 -- common/autotest_common.sh@817 -- # '[' -z 2188164 ']' 00:26:43.745 09:01:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.745 09:01:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:43.745 09:01:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.745 09:01:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:43.745 09:01:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.745 [2024-04-26 09:01:00.776480] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:26:43.745 [2024-04-26 09:01:00.776526] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:43.745 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.745 [2024-04-26 09:01:00.848900] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.745 [2024-04-26 09:01:00.914745] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.745 [2024-04-26 09:01:00.914787] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.745 [2024-04-26 09:01:00.914797] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.745 [2024-04-26 09:01:00.914806] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:43.745 [2024-04-26 09:01:00.914813] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
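(With connectivity verified, the kernel TCP initiator module is loaded for the nvme-cli parts of the suite and nvmfappstart launches the target inside the namespace; the reactor line that follows below belongs to that app. Reduced to its essentials, with the waitforlisten step elided, that is roughly:)

    modprobe nvme-tcp                          # kernel host driver, per the log above
    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!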
00:26:43.745 [2024-04-26 09:01:00.914835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.681 09:01:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:44.681 09:01:01 -- common/autotest_common.sh@850 -- # return 0 00:26:44.681 09:01:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:44.681 09:01:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:44.681 09:01:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.681 09:01:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.681 09:01:01 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:44.681 09:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.681 09:01:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.681 [2024-04-26 09:01:01.613208] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.681 09:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.681 09:01:01 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:44.681 09:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.681 09:01:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.681 [2024-04-26 09:01:01.625378] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:44.681 09:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.681 09:01:01 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:44.681 09:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.681 09:01:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.681 null0 00:26:44.681 09:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.681 09:01:01 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:44.681 09:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.681 09:01:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.681 null1 00:26:44.681 09:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.681 09:01:01 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:44.681 09:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.681 09:01:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.681 09:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.681 09:01:01 -- host/discovery.sh@45 -- # hostpid=2188320 00:26:44.681 09:01:01 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:44.681 09:01:01 -- host/discovery.sh@46 -- # waitforlisten 2188320 /tmp/host.sock 00:26:44.681 09:01:01 -- common/autotest_common.sh@817 -- # '[' -z 2188320 ']' 00:26:44.681 09:01:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:26:44.681 09:01:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:44.681 09:01:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:44.681 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:44.681 09:01:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:44.681 09:01:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.681 [2024-04-26 09:01:01.701213] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
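(At this point the target publishes the well-known discovery subsystem and backs it with two null bdevs, while the second nvmf_tgt initializing above is the host side, driven over its own RPC socket. Condensed from the RPCs logged, where rpc_cmd is the harness's wrapper for rpc.py against the target socket:)

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    rpc_cmd bdev_null_create null0 1000 512   # 1000 MB null bdev, 512 B blocks
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine
    # Host-side app, driven over /tmp/host.sock:
    "$SPDK/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &
    hostpid=$!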
00:26:44.681 [2024-04-26 09:01:01.701261] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2188320 ] 00:26:44.681 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.681 [2024-04-26 09:01:01.771293] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.681 [2024-04-26 09:01:01.843686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.617 09:01:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:45.617 09:01:02 -- common/autotest_common.sh@850 -- # return 0 00:26:45.617 09:01:02 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:45.617 09:01:02 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:45.617 09:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.617 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.617 09:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.617 09:01:02 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:45.617 09:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.617 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.617 09:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.617 09:01:02 -- host/discovery.sh@72 -- # notify_id=0 00:26:45.617 09:01:02 -- host/discovery.sh@83 -- # get_subsystem_names 00:26:45.617 09:01:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:45.617 09:01:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:45.617 09:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.617 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.617 09:01:02 -- host/discovery.sh@59 -- # sort 00:26:45.617 09:01:02 -- host/discovery.sh@59 -- # xargs 00:26:45.617 09:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.617 09:01:02 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:45.617 09:01:02 -- host/discovery.sh@84 -- # get_bdev_list 00:26:45.617 09:01:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.617 09:01:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:45.617 09:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.617 09:01:02 -- host/discovery.sh@55 -- # sort 00:26:45.617 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.617 09:01:02 -- host/discovery.sh@55 -- # xargs 00:26:45.617 09:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.617 09:01:02 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:45.617 09:01:02 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:45.617 09:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.617 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.617 09:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.617 09:01:02 -- host/discovery.sh@87 -- # get_subsystem_names 00:26:45.617 09:01:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:45.617 09:01:02 -- host/discovery.sh@59 -- # xargs 00:26:45.617 09:01:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:45.617 09:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 
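(The host app is then told to follow the discovery service; every subsystem the discovery controller reports gets attached automatically under the nvme name prefix, and the assertions that follow poll the two views below until they match. Condensed, with HOST_RPC as an illustrative shorthand for the logged rpc.py invocation:)

    HOST_RPC=("$SPDK/scripts/rpc.py" -s /tmp/host.sock)
    "${HOST_RPC[@]}" log_set_flag bdev_nvme
    "${HOST_RPC[@]}" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
        -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    # The get_subsystem_names / get_bdev_list helpers reduce to:
    "${HOST_RPC[@]}" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    "${HOST_RPC[@]}" bdev_get_bdevs | jq -r '.[].name' | sort | xargs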
00:26:45.617 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.617 09:01:02 -- host/discovery.sh@59 -- # sort 00:26:45.617 09:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.617 09:01:02 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:45.617 09:01:02 -- host/discovery.sh@88 -- # get_bdev_list 00:26:45.617 09:01:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.617 09:01:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:45.617 09:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.617 09:01:02 -- host/discovery.sh@55 -- # sort 00:26:45.617 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.617 09:01:02 -- host/discovery.sh@55 -- # xargs 00:26:45.617 09:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.617 09:01:02 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:45.617 09:01:02 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:45.617 09:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.617 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.617 09:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.617 09:01:02 -- host/discovery.sh@91 -- # get_subsystem_names 00:26:45.617 09:01:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:45.617 09:01:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:45.617 09:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.617 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.617 09:01:02 -- host/discovery.sh@59 -- # sort 00:26:45.617 09:01:02 -- host/discovery.sh@59 -- # xargs 00:26:45.617 09:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.617 09:01:02 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:45.617 09:01:02 -- host/discovery.sh@92 -- # get_bdev_list 00:26:45.617 09:01:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.617 09:01:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:45.617 09:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.617 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.617 09:01:02 -- host/discovery.sh@55 -- # sort 00:26:45.617 09:01:02 -- host/discovery.sh@55 -- # xargs 00:26:45.617 09:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.617 09:01:02 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:45.618 09:01:02 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:45.618 09:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.618 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.875 [2024-04-26 09:01:02.864635] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.875 09:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.875 09:01:02 -- host/discovery.sh@97 -- # get_subsystem_names 00:26:45.875 09:01:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:45.875 09:01:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:45.875 09:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.875 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.875 09:01:02 -- host/discovery.sh@59 -- # sort 00:26:45.875 09:01:02 -- host/discovery.sh@59 -- # xargs 00:26:45.875 09:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.875 09:01:02 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:45.875 09:01:02 -- host/discovery.sh@98 -- # get_bdev_list 00:26:45.875 09:01:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.875 09:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.875 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.875 09:01:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:45.875 09:01:02 -- host/discovery.sh@55 -- # xargs 00:26:45.875 09:01:02 -- host/discovery.sh@55 -- # sort 00:26:45.875 09:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.875 09:01:02 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:45.875 09:01:02 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:45.875 09:01:02 -- host/discovery.sh@79 -- # expected_count=0 00:26:45.875 09:01:02 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:45.875 09:01:02 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:45.875 09:01:02 -- common/autotest_common.sh@901 -- # local max=10 00:26:45.875 09:01:02 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:45.875 09:01:02 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:45.875 09:01:02 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:45.875 09:01:02 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:45.875 09:01:02 -- host/discovery.sh@74 -- # jq '. | length' 00:26:45.875 09:01:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.875 09:01:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.875 09:01:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.875 09:01:03 -- host/discovery.sh@74 -- # notification_count=0 00:26:45.875 09:01:03 -- host/discovery.sh@75 -- # notify_id=0 00:26:45.875 09:01:03 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:45.875 09:01:03 -- common/autotest_common.sh@904 -- # return 0 00:26:45.875 09:01:03 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:45.875 09:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.875 09:01:03 -- common/autotest_common.sh@10 -- # set +x 00:26:45.875 09:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.875 09:01:03 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:45.875 09:01:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:45.875 09:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:26:45.875 09:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:45.875 09:01:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:45.875 09:01:03 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:26:45.875 09:01:03 -- host/discovery.sh@59 -- # sort 00:26:45.875 09:01:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:45.875 09:01:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:45.875 09:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.875 09:01:03 -- common/autotest_common.sh@10 -- # set +x 00:26:45.875 09:01:03 -- host/discovery.sh@59 -- # xargs 00:26:45.875 09:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:26:45.875 09:01:03 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:26:45.875 09:01:03 -- common/autotest_common.sh@906 -- # sleep 1 00:26:46.441 [2024-04-26 09:01:03.533765] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:46.441 [2024-04-26 09:01:03.533783] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:46.441 [2024-04-26 09:01:03.533797] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:46.441 [2024-04-26 09:01:03.620064] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:46.699 [2024-04-26 09:01:03.807934] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:46.700 [2024-04-26 09:01:03.807953] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:46.959 09:01:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:46.959 09:01:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:46.959 09:01:04 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:26:46.959 09:01:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:46.959 09:01:04 -- host/discovery.sh@59 -- # xargs 00:26:46.959 09:01:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:46.959 09:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.959 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:26:46.959 09:01:04 -- host/discovery.sh@59 -- # sort 00:26:46.959 09:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.959 09:01:04 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.959 09:01:04 -- common/autotest_common.sh@904 -- # return 0 00:26:46.959 09:01:04 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:46.959 09:01:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:46.959 09:01:04 -- common/autotest_common.sh@901 -- # local max=10 00:26:46.959 09:01:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:46.959 09:01:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:46.959 09:01:04 -- common/autotest_common.sh@903 -- # get_bdev_list 00:26:46.959 09:01:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.959 09:01:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:46.959 09:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.959 09:01:04 -- host/discovery.sh@55 -- # sort 00:26:46.959 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:26:46.959 09:01:04 -- host/discovery.sh@55 -- # xargs 00:26:46.959 09:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.959 09:01:04 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:46.959 09:01:04 -- common/autotest_common.sh@904 -- # return 0 00:26:46.959 09:01:04 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:46.959 09:01:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:46.959 09:01:04 -- common/autotest_common.sh@901 -- # local max=10 00:26:46.959 09:01:04 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:26:46.959 09:01:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:46.959 09:01:04 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:26:46.959 09:01:04 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:46.959 09:01:04 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:46.959 09:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.959 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:26:46.959 09:01:04 -- host/discovery.sh@63 -- # sort -n 00:26:46.959 09:01:04 -- host/discovery.sh@63 -- # xargs 00:26:46.959 09:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.218 09:01:04 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:26:47.218 09:01:04 -- common/autotest_common.sh@904 -- # return 0 00:26:47.218 09:01:04 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:47.218 09:01:04 -- host/discovery.sh@79 -- # expected_count=1 00:26:47.218 09:01:04 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:47.218 09:01:04 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:47.218 09:01:04 -- common/autotest_common.sh@901 -- # local max=10 00:26:47.218 09:01:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:47.218 09:01:04 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:47.218 09:01:04 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:47.218 09:01:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:47.218 09:01:04 -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:47.218 09:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.218 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:26:47.218 09:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.218 09:01:04 -- host/discovery.sh@74 -- # notification_count=1 00:26:47.218 09:01:04 -- host/discovery.sh@75 -- # notify_id=1 00:26:47.218 09:01:04 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:47.218 09:01:04 -- common/autotest_common.sh@904 -- # return 0 00:26:47.218 09:01:04 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:47.219 09:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.219 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:26:47.219 09:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.219 09:01:04 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:47.219 09:01:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:47.219 09:01:04 -- common/autotest_common.sh@901 -- # local max=10 00:26:47.219 09:01:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:47.219 09:01:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:47.219 09:01:04 -- common/autotest_common.sh@903 -- # get_bdev_list 00:26:47.219 09:01:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.219 09:01:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:47.219 09:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.219 09:01:04 -- host/discovery.sh@55 -- # sort 00:26:47.219 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:26:47.219 09:01:04 -- host/discovery.sh@55 -- # xargs 00:26:47.219 09:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.219 09:01:04 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:47.219 09:01:04 -- common/autotest_common.sh@904 -- # return 0 00:26:47.219 09:01:04 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:47.219 09:01:04 -- host/discovery.sh@79 -- # expected_count=1 00:26:47.219 09:01:04 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:47.219 09:01:04 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:47.219 09:01:04 -- common/autotest_common.sh@901 -- # local max=10 00:26:47.219 09:01:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:47.219 09:01:04 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:47.219 09:01:04 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:47.219 09:01:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:47.219 09:01:04 -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:47.219 09:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.219 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:26:47.219 09:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.219 09:01:04 -- host/discovery.sh@74 -- # notification_count=1 00:26:47.219 09:01:04 -- host/discovery.sh@75 -- # notify_id=2 00:26:47.219 09:01:04 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:47.219 09:01:04 -- common/autotest_common.sh@904 -- # return 0 00:26:47.219 09:01:04 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:47.219 09:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.219 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:26:47.219 [2024-04-26 09:01:04.376643] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:47.219 [2024-04-26 09:01:04.377895] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:47.219 [2024-04-26 09:01:04.377916] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:47.219 09:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.219 09:01:04 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:47.219 09:01:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:47.219 09:01:04 -- common/autotest_common.sh@901 -- # local max=10 00:26:47.219 09:01:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:47.219 09:01:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:47.219 09:01:04 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:26:47.219 09:01:04 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:47.219 09:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.219 09:01:04 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:47.219 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:26:47.219 09:01:04 -- host/discovery.sh@59 -- # sort 00:26:47.219 09:01:04 -- host/discovery.sh@59 -- # xargs 00:26:47.219 09:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.219 09:01:04 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.219 09:01:04 -- common/autotest_common.sh@904 -- # return 0 00:26:47.219 09:01:04 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:47.219 09:01:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:47.219 09:01:04 -- common/autotest_common.sh@901 -- # local max=10 00:26:47.219 09:01:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:47.219 09:01:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:47.219 09:01:04 -- common/autotest_common.sh@903 -- # get_bdev_list 00:26:47.219 09:01:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:47.219 09:01:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.219 09:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.219 09:01:04 -- host/discovery.sh@55 -- # sort 00:26:47.219 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:26:47.219 09:01:04 -- host/discovery.sh@55 -- # xargs 00:26:47.482 09:01:04 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:26:47.482 09:01:04 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:47.482 09:01:04 -- common/autotest_common.sh@904 -- # return 0 00:26:47.482 09:01:04 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:47.482 09:01:04 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:47.482 09:01:04 -- common/autotest_common.sh@901 -- # local max=10 00:26:47.482 09:01:04 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:47.482 09:01:04 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:47.482 09:01:04 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:26:47.482 09:01:04 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:47.482 09:01:04 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:47.482 09:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.482 09:01:04 -- common/autotest_common.sh@10 -- # set +x 00:26:47.482 09:01:04 -- host/discovery.sh@63 -- # sort -n 00:26:47.482 09:01:04 -- host/discovery.sh@63 -- # xargs 00:26:47.482 09:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.482 [2024-04-26 09:01:04.507301] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:47.482 09:01:04 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:47.482 09:01:04 -- common/autotest_common.sh@906 -- # sleep 1 00:26:47.482 [2024-04-26 09:01:04.609166] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:47.482 [2024-04-26 09:01:04.609183] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:47.482 [2024-04-26 09:01:04.609190] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:48.418 09:01:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:48.418 09:01:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:48.418 09:01:05 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:26:48.418 09:01:05 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:48.418 09:01:05 -- host/discovery.sh@63 -- # xargs 00:26:48.418 09:01:05 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:48.418 09:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.418 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:26:48.418 09:01:05 -- host/discovery.sh@63 -- # sort -n 00:26:48.418 09:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.418 09:01:05 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:48.418 09:01:05 -- common/autotest_common.sh@904 -- # return 0 00:26:48.418 09:01:05 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:48.418 09:01:05 -- host/discovery.sh@79 -- # expected_count=0 00:26:48.418 09:01:05 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:48.418 09:01:05 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:48.418 09:01:05 -- common/autotest_common.sh@901 -- # local max=10 00:26:48.418 09:01:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:48.418 09:01:05 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:48.418 09:01:05 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:48.418 09:01:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:48.418 09:01:05 -- host/discovery.sh@74 -- # jq '. | length' 00:26:48.418 09:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.418 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:26:48.418 09:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.418 09:01:05 -- host/discovery.sh@74 -- # notification_count=0 00:26:48.418 09:01:05 -- host/discovery.sh@75 -- # notify_id=2 00:26:48.418 09:01:05 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:48.418 09:01:05 -- common/autotest_common.sh@904 -- # return 0 00:26:48.418 09:01:05 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:48.418 09:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.418 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:26:48.418 [2024-04-26 09:01:05.640904] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:48.418 [2024-04-26 09:01:05.640924] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:48.418 09:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.418 09:01:05 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:48.418 09:01:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:48.418 09:01:05 -- common/autotest_common.sh@901 -- # local max=10 00:26:48.418 09:01:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:48.418 09:01:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:48.418 09:01:05 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:26:48.418 [2024-04-26 09:01:05.649830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.418 [2024-04-26 09:01:05.649851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.418 [2024-04-26 09:01:05.649862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.418 [2024-04-26 09:01:05.649872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.418 [2024-04-26 09:01:05.649882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.418 [2024-04-26 09:01:05.649891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.418 [2024-04-26 09:01:05.649901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 
nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.418 [2024-04-26 09:01:05.649910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.418 [2024-04-26 09:01:05.649919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a940 is same with the state(5) to be set 00:26:48.418 09:01:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:48.418 09:01:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:48.418 09:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.418 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:26:48.419 09:01:05 -- host/discovery.sh@59 -- # sort 00:26:48.419 09:01:05 -- host/discovery.sh@59 -- # xargs 00:26:48.419 [2024-04-26 09:01:05.659846] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a940 (9): Bad file descriptor 00:26:48.677 09:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.677 [2024-04-26 09:01:05.669882] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:48.677 [2024-04-26 09:01:05.670151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.677 [2024-04-26 09:01:05.670594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.677 [2024-04-26 09:01:05.670609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a940 with addr=10.0.0.2, port=4420 00:26:48.677 [2024-04-26 09:01:05.670620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a940 is same with the state(5) to be set 00:26:48.677 [2024-04-26 09:01:05.670634] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a940 (9): Bad file descriptor 00:26:48.677 [2024-04-26 09:01:05.670655] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:48.677 [2024-04-26 09:01:05.670664] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:48.677 [2024-04-26 09:01:05.670674] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:48.677 [2024-04-26 09:01:05.670687] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
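[Editor's note] The waitforcondition calls traced above implement a bounded poll: evaluate a shell expression, and if it is not yet true, sleep one second and retry while a countdown lasts. A minimal sketch reconstructed from the xtrace lines (autotest_common.sh@900-906); the canonical helper lives in SPDK's autotest_common.sh, so treat this as illustrative:

    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            # cond is a shell expression, e.g.
            # '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }

This is why the trace shows the 4420-only path list failing the comparison once, then passing on the next iteration after the discovery poller has attached the new 4421 path.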
00:26:48.677 [2024-04-26 09:01:05.679937] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:48.677 [2024-04-26 09:01:05.680200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.677 [2024-04-26 09:01:05.680702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.677 [2024-04-26 09:01:05.680716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a940 with addr=10.0.0.2, port=4420 00:26:48.677 [2024-04-26 09:01:05.680726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a940 is same with the state(5) to be set 00:26:48.677 [2024-04-26 09:01:05.680739] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a940 (9): Bad file descriptor 00:26:48.677 [2024-04-26 09:01:05.680765] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:48.677 [2024-04-26 09:01:05.680775] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:48.677 [2024-04-26 09:01:05.680784] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:48.677 [2024-04-26 09:01:05.680796] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.677 [2024-04-26 09:01:05.689992] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:48.677 [2024-04-26 09:01:05.690502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.677 [2024-04-26 09:01:05.690908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.677 [2024-04-26 09:01:05.690922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a940 with addr=10.0.0.2, port=4420 00:26:48.677 [2024-04-26 09:01:05.690932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a940 is same with the state(5) to be set 00:26:48.677 [2024-04-26 09:01:05.690945] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a940 (9): Bad file descriptor 00:26:48.677 [2024-04-26 09:01:05.690965] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:48.677 [2024-04-26 09:01:05.690974] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:48.677 [2024-04-26 09:01:05.690983] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:48.677 [2024-04-26 09:01:05.690995] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
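[Editor's note] The path condition being polled comes from get_subsystem_paths, which queries the host application over /tmp/host.sock for the controllers behind a controller name and flattens the listener service IDs into one sorted, space-separated string. A sketch of host/discovery.sh@63 as reconstructed from the trace, assuming rpc_cmd wraps scripts/rpc.py as elsewhere in these tests:

    get_subsystem_paths() {
        local name=$1
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

It returns "4420 4421" while both listeners are attached and "4421" once nvmf_subsystem_remove_listener drops the first port, which is exactly the transition the surrounding waits assert.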
00:26:48.677 09:01:05 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.677 09:01:05 -- common/autotest_common.sh@904 -- # return 0 00:26:48.677 09:01:05 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:48.677 09:01:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:48.677 09:01:05 -- common/autotest_common.sh@901 -- # local max=10 00:26:48.677 09:01:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:48.678 09:01:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:48.678 [2024-04-26 09:01:05.700047] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:48.678 09:01:05 -- common/autotest_common.sh@903 -- # get_bdev_list 00:26:48.678 [2024-04-26 09:01:05.700505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.678 [2024-04-26 09:01:05.700948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.678 [2024-04-26 09:01:05.700961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a940 with addr=10.0.0.2, port=4420 00:26:48.678 [2024-04-26 09:01:05.700971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a940 is same with the state(5) to be set 00:26:48.678 [2024-04-26 09:01:05.700984] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a940 (9): Bad file descriptor 00:26:48.678 [2024-04-26 09:01:05.701011] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:48.678 [2024-04-26 09:01:05.701021] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:48.678 [2024-04-26 09:01:05.701030] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:48.678 [2024-04-26 09:01:05.701042] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
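[Editor's note] The notification checks interleaved earlier (is_notification_count_eq) follow the same rpc/jq idiom: fetch the notifications newer than the last consumed id, count them, and advance the high-water mark. A sketch based on the host/discovery.sh@74-75 lines in the trace; the exact bookkeeping of notify_id is inferred from the values it takes in the log (2, then 4 after two new events):

    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        # Advance the high-water mark so later checks only count new events.
        notify_id=$((notify_id + notification_count))
    }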
00:26:48.678 09:01:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.678 09:01:05 -- host/discovery.sh@55 -- # xargs 00:26:48.678 09:01:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:48.678 09:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.678 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:26:48.678 09:01:05 -- host/discovery.sh@55 -- # sort 00:26:48.678 [2024-04-26 09:01:05.710100] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:48.678 [2024-04-26 09:01:05.710600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.678 [2024-04-26 09:01:05.710960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.678 [2024-04-26 09:01:05.710973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a940 with addr=10.0.0.2, port=4420 00:26:48.678 [2024-04-26 09:01:05.710983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a940 is same with the state(5) to be set 00:26:48.678 [2024-04-26 09:01:05.710997] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a940 (9): Bad file descriptor 00:26:48.678 [2024-04-26 09:01:05.711018] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:48.678 [2024-04-26 09:01:05.711028] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:48.678 [2024-04-26 09:01:05.711037] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:48.678 [2024-04-26 09:01:05.711049] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.678 [2024-04-26 09:01:05.720155] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:48.678 [2024-04-26 09:01:05.720664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.678 [2024-04-26 09:01:05.721088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.678 [2024-04-26 09:01:05.721101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf1a940 with addr=10.0.0.2, port=4420 00:26:48.678 [2024-04-26 09:01:05.721111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1a940 is same with the state(5) to be set 00:26:48.678 [2024-04-26 09:01:05.721124] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1a940 (9): Bad file descriptor 00:26:48.678 [2024-04-26 09:01:05.721154] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:48.678 [2024-04-26 09:01:05.721164] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:48.678 [2024-04-26 09:01:05.721173] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:48.678 [2024-04-26 09:01:05.721185] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
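[Editor's note] get_bdev_list, polled between the reconnect failures above, is the same pattern applied to bdev_get_bdevs. A sketch of host/discovery.sh@55 as it appears in the trace:

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

It yields "nvme0n1 nvme0n2" while both namespaces are exposed and an empty string once the discovery service is stopped and the bdevs are deleted, which the later waits in this test depend on.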
00:26:48.678 [2024-04-26 09:01:05.727164] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:48.678 [2024-04-26 09:01:05.727179] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:48.678 09:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.678 09:01:05 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:48.678 09:01:05 -- common/autotest_common.sh@904 -- # return 0 00:26:48.678 09:01:05 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:48.678 09:01:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:48.678 09:01:05 -- common/autotest_common.sh@901 -- # local max=10 00:26:48.678 09:01:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:48.678 09:01:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:48.678 09:01:05 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:26:48.678 09:01:05 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:48.678 09:01:05 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:48.678 09:01:05 -- host/discovery.sh@63 -- # xargs 00:26:48.678 09:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.678 09:01:05 -- host/discovery.sh@63 -- # sort -n 00:26:48.678 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:26:48.678 09:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.678 09:01:05 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:26:48.678 09:01:05 -- common/autotest_common.sh@904 -- # return 0 00:26:48.678 09:01:05 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:48.678 09:01:05 -- host/discovery.sh@79 -- # expected_count=0 00:26:48.678 09:01:05 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:48.678 09:01:05 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:48.678 09:01:05 -- common/autotest_common.sh@901 -- # local max=10 00:26:48.678 09:01:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:48.678 09:01:05 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:48.678 09:01:05 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:48.678 09:01:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:48.678 09:01:05 -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:48.678 09:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.678 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:26:48.678 09:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.678 09:01:05 -- host/discovery.sh@74 -- # notification_count=0 00:26:48.678 09:01:05 -- host/discovery.sh@75 -- # notify_id=2 00:26:48.678 09:01:05 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:48.678 09:01:05 -- common/autotest_common.sh@904 -- # return 0 00:26:48.678 09:01:05 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:48.678 09:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.678 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:26:48.678 09:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.678 09:01:05 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:48.678 09:01:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:48.678 09:01:05 -- common/autotest_common.sh@901 -- # local max=10 00:26:48.678 09:01:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:48.678 09:01:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:48.678 09:01:05 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:26:48.678 09:01:05 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:48.678 09:01:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:48.678 09:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.678 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:26:48.678 09:01:05 -- host/discovery.sh@59 -- # sort 00:26:48.678 09:01:05 -- host/discovery.sh@59 -- # xargs 00:26:48.678 09:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.678 09:01:05 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:26:48.678 09:01:05 -- common/autotest_common.sh@904 -- # return 0 00:26:48.678 09:01:05 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:48.678 09:01:05 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:48.678 09:01:05 -- common/autotest_common.sh@901 -- # local max=10 00:26:48.678 09:01:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:48.678 09:01:05 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:48.678 09:01:05 -- common/autotest_common.sh@903 -- # get_bdev_list 00:26:48.679 09:01:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.679 09:01:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:48.679 09:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.679 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:26:48.679 09:01:05 -- host/discovery.sh@55 -- # sort 00:26:48.679 09:01:05 -- host/discovery.sh@55 -- # xargs 00:26:48.937 09:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.937 09:01:05 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:26:48.937 09:01:05 -- common/autotest_common.sh@904 -- # return 0 00:26:48.937 09:01:05 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:48.937 09:01:05 -- host/discovery.sh@79 -- # expected_count=2 00:26:48.937 09:01:05 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:48.937 09:01:05 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:48.937 09:01:05 -- common/autotest_common.sh@901 -- # local max=10 00:26:48.937 09:01:05 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:48.937 09:01:05 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:48.937 09:01:05 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:48.937 09:01:05 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:48.937 09:01:05 -- host/discovery.sh@74 -- # jq '. | length' 00:26:48.937 09:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.937 09:01:05 -- common/autotest_common.sh@10 -- # set +x 00:26:48.937 09:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.937 09:01:06 -- host/discovery.sh@74 -- # notification_count=2 00:26:48.937 09:01:06 -- host/discovery.sh@75 -- # notify_id=4 00:26:48.937 09:01:06 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:48.937 09:01:06 -- common/autotest_common.sh@904 -- # return 0 00:26:48.937 09:01:06 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:48.937 09:01:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.937 09:01:06 -- common/autotest_common.sh@10 -- # set +x 00:26:49.872 [2024-04-26 09:01:07.063671] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:49.872 [2024-04-26 09:01:07.063687] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:49.872 [2024-04-26 09:01:07.063700] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:50.132 [2024-04-26 09:01:07.152973] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:50.132 [2024-04-26 09:01:07.220715] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:50.132 [2024-04-26 09:01:07.220740] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:50.132 09:01:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.132 09:01:07 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:50.132 09:01:07 -- common/autotest_common.sh@638 -- # local es=0 00:26:50.132 09:01:07 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:50.132 09:01:07 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:50.132 09:01:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:50.132 09:01:07 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:50.132 09:01:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:50.132 09:01:07 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:50.132 09:01:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.132 09:01:07 -- 
common/autotest_common.sh@10 -- # set +x 00:26:50.132 request: 00:26:50.132 { 00:26:50.132 "name": "nvme", 00:26:50.132 "trtype": "tcp", 00:26:50.132 "traddr": "10.0.0.2", 00:26:50.132 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:50.132 "adrfam": "ipv4", 00:26:50.132 "trsvcid": "8009", 00:26:50.132 "wait_for_attach": true, 00:26:50.132 "method": "bdev_nvme_start_discovery", 00:26:50.132 "req_id": 1 00:26:50.132 } 00:26:50.132 Got JSON-RPC error response 00:26:50.132 response: 00:26:50.132 { 00:26:50.132 "code": -17, 00:26:50.132 "message": "File exists" 00:26:50.132 } 00:26:50.132 09:01:07 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:50.132 09:01:07 -- common/autotest_common.sh@641 -- # es=1 00:26:50.132 09:01:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:50.132 09:01:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:50.132 09:01:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:50.132 09:01:07 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:50.132 09:01:07 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:50.132 09:01:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.132 09:01:07 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:50.133 09:01:07 -- common/autotest_common.sh@10 -- # set +x 00:26:50.133 09:01:07 -- host/discovery.sh@67 -- # sort 00:26:50.133 09:01:07 -- host/discovery.sh@67 -- # xargs 00:26:50.133 09:01:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.133 09:01:07 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:50.133 09:01:07 -- host/discovery.sh@146 -- # get_bdev_list 00:26:50.133 09:01:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.133 09:01:07 -- host/discovery.sh@55 -- # xargs 00:26:50.133 09:01:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:50.133 09:01:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.133 09:01:07 -- common/autotest_common.sh@10 -- # set +x 00:26:50.133 09:01:07 -- host/discovery.sh@55 -- # sort 00:26:50.133 09:01:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.133 09:01:07 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:50.133 09:01:07 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:50.133 09:01:07 -- common/autotest_common.sh@638 -- # local es=0 00:26:50.133 09:01:07 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:50.133 09:01:07 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:50.133 09:01:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:50.133 09:01:07 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:50.133 09:01:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:50.133 09:01:07 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:50.133 09:01:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.133 09:01:07 -- common/autotest_common.sh@10 -- # set +x 00:26:50.133 request: 00:26:50.133 { 00:26:50.133 "name": "nvme_second", 00:26:50.133 "trtype": "tcp", 00:26:50.133 "traddr": "10.0.0.2", 00:26:50.133 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:26:50.133 "adrfam": "ipv4", 00:26:50.133 "trsvcid": "8009", 00:26:50.133 "wait_for_attach": true, 00:26:50.133 "method": "bdev_nvme_start_discovery", 00:26:50.133 "req_id": 1 00:26:50.133 } 00:26:50.133 Got JSON-RPC error response 00:26:50.133 response: 00:26:50.133 { 00:26:50.133 "code": -17, 00:26:50.133 "message": "File exists" 00:26:50.133 } 00:26:50.133 09:01:07 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:50.133 09:01:07 -- common/autotest_common.sh@641 -- # es=1 00:26:50.133 09:01:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:50.133 09:01:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:50.133 09:01:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:50.133 09:01:07 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:50.133 09:01:07 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:50.133 09:01:07 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:50.133 09:01:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.133 09:01:07 -- common/autotest_common.sh@10 -- # set +x 00:26:50.133 09:01:07 -- host/discovery.sh@67 -- # sort 00:26:50.133 09:01:07 -- host/discovery.sh@67 -- # xargs 00:26:50.133 09:01:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.392 09:01:07 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:50.392 09:01:07 -- host/discovery.sh@152 -- # get_bdev_list 00:26:50.392 09:01:07 -- host/discovery.sh@55 -- # sort 00:26:50.392 09:01:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.392 09:01:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.392 09:01:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:50.392 09:01:07 -- common/autotest_common.sh@10 -- # set +x 00:26:50.392 09:01:07 -- host/discovery.sh@55 -- # xargs 00:26:50.392 09:01:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.392 09:01:07 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:50.392 09:01:07 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:50.392 09:01:07 -- common/autotest_common.sh@638 -- # local es=0 00:26:50.392 09:01:07 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:50.392 09:01:07 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:50.392 09:01:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:50.392 09:01:07 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:50.392 09:01:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:50.392 09:01:07 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:50.392 09:01:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.392 09:01:07 -- common/autotest_common.sh@10 -- # set +x 00:26:51.327 [2024-04-26 09:01:08.476495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.327 [2024-04-26 09:01:08.476944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.327 [2024-04-26 09:01:08.476959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0xf34a10 with addr=10.0.0.2, port=8010 00:26:51.327 [2024-04-26 09:01:08.476972] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:51.327 [2024-04-26 09:01:08.476982] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:51.327 [2024-04-26 09:01:08.476990] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:52.261 [2024-04-26 09:01:09.478829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-04-26 09:01:09.479287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.261 [2024-04-26 09:01:09.479300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf34a10 with addr=10.0.0.2, port=8010 00:26:52.261 [2024-04-26 09:01:09.479314] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:52.261 [2024-04-26 09:01:09.479323] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:52.261 [2024-04-26 09:01:09.479331] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:53.634 [2024-04-26 09:01:10.480791] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:53.634 request: 00:26:53.634 { 00:26:53.634 "name": "nvme_second", 00:26:53.634 "trtype": "tcp", 00:26:53.634 "traddr": "10.0.0.2", 00:26:53.634 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:53.634 "adrfam": "ipv4", 00:26:53.634 "trsvcid": "8010", 00:26:53.634 "attach_timeout_ms": 3000, 00:26:53.634 "method": "bdev_nvme_start_discovery", 00:26:53.634 "req_id": 1 00:26:53.634 } 00:26:53.634 Got JSON-RPC error response 00:26:53.634 response: 00:26:53.634 { 00:26:53.634 "code": -110, 00:26:53.634 "message": "Connection timed out" 00:26:53.634 } 00:26:53.634 09:01:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:53.634 09:01:10 -- common/autotest_common.sh@641 -- # es=1 00:26:53.634 09:01:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:53.634 09:01:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:53.634 09:01:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:53.634 09:01:10 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:53.634 09:01:10 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:53.634 09:01:10 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:53.634 09:01:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.634 09:01:10 -- host/discovery.sh@67 -- # sort 00:26:53.634 09:01:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.634 09:01:10 -- host/discovery.sh@67 -- # xargs 00:26:53.634 09:01:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.634 09:01:10 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:53.634 09:01:10 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:53.634 09:01:10 -- host/discovery.sh@161 -- # kill 2188320 00:26:53.634 09:01:10 -- host/discovery.sh@162 -- # nvmftestfini 00:26:53.634 09:01:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:53.634 09:01:10 -- nvmf/common.sh@117 -- # sync 00:26:53.634 09:01:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:53.634 09:01:10 -- nvmf/common.sh@120 -- # set +e 00:26:53.634 09:01:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:53.634 09:01:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:53.634 rmmod nvme_tcp 00:26:53.634 rmmod nvme_fabrics 
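[Editor's note] Both failing bdev_nvme_start_discovery calls above are wrapped in NOT, which inverts the exit status so that an expected JSON-RPC error ("File exists" for the duplicate discovery services, "Connection timed out" for the unreachable 8010 listener with -T 3000) counts as a pass. A minimal sketch of that inversion, reconstructed from the autotest_common.sh@638-665 lines in the trace; the signal guard mirrors the (( es > 128 )) check shown there:

    NOT() {
        local es=0
        "$@" || es=$?
        # An exit status above 128 usually means death by signal, which is
        # never the "expected failure" being asserted; propagate it instead.
        ((es > 128)) && return "$es"
        # Succeed exactly when the wrapped command failed.
        ((es != 0))
    }

Usage as in the trace:

    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w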
00:26:53.634 rmmod nvme_keyring 00:26:53.634 09:01:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:53.634 09:01:10 -- nvmf/common.sh@124 -- # set -e 00:26:53.634 09:01:10 -- nvmf/common.sh@125 -- # return 0 00:26:53.634 09:01:10 -- nvmf/common.sh@478 -- # '[' -n 2188164 ']' 00:26:53.634 09:01:10 -- nvmf/common.sh@479 -- # killprocess 2188164 00:26:53.634 09:01:10 -- common/autotest_common.sh@936 -- # '[' -z 2188164 ']' 00:26:53.634 09:01:10 -- common/autotest_common.sh@940 -- # kill -0 2188164 00:26:53.634 09:01:10 -- common/autotest_common.sh@941 -- # uname 00:26:53.634 09:01:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:53.634 09:01:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2188164 00:26:53.634 09:01:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:53.634 09:01:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:53.634 09:01:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2188164' 00:26:53.634 killing process with pid 2188164 00:26:53.634 09:01:10 -- common/autotest_common.sh@955 -- # kill 2188164 00:26:53.634 09:01:10 -- common/autotest_common.sh@960 -- # wait 2188164 00:26:53.634 09:01:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:53.634 09:01:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:53.634 09:01:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:53.634 09:01:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:53.634 09:01:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:53.634 09:01:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.634 09:01:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:53.634 09:01:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.169 09:01:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:56.169 00:26:56.169 real 0m19.215s 00:26:56.169 user 0m22.257s 00:26:56.169 sys 0m7.186s 00:26:56.169 09:01:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:56.169 09:01:12 -- common/autotest_common.sh@10 -- # set +x 00:26:56.169 ************************************ 00:26:56.169 END TEST nvmf_discovery 00:26:56.169 ************************************ 00:26:56.169 09:01:12 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:56.169 09:01:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:56.169 09:01:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:56.169 09:01:12 -- common/autotest_common.sh@10 -- # set +x 00:26:56.169 ************************************ 00:26:56.169 START TEST nvmf_discovery_remove_ifc 00:26:56.169 ************************************ 00:26:56.169 09:01:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:56.169 * Looking for test storage... 
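[Editor's note] Before the discovery_remove_ifc output continues, note the teardown order that closed out nvmf_discovery just above: sync, unload the kernel NVMe-oF modules, kill the target process, drop the test network namespace, and flush the initiator interface. A condensed sketch of those steps; the commands are taken from the log except the namespace deletion, which is an assumed effect of _remove_spdk_ns:

    sync
    modprobe -v -r nvme-tcp            # log shows nvme_tcp, nvme_fabrics, nvme_keyring removed
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                    # killprocess waits for the pid to exit in the real script
    ip netns delete cvl_0_0_ns_spdk    # assumption: what _remove_spdk_ns does here
    ip -4 addr flush cvl_0_1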
00:26:56.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:56.169 09:01:13 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.169 09:01:13 -- nvmf/common.sh@7 -- # uname -s 00:26:56.169 09:01:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.169 09:01:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.169 09:01:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.169 09:01:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.169 09:01:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.169 09:01:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.169 09:01:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.169 09:01:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.169 09:01:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.169 09:01:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.169 09:01:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:56.169 09:01:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:56.169 09:01:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.169 09:01:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.169 09:01:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.169 09:01:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.169 09:01:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.169 09:01:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.169 09:01:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.169 09:01:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.169 09:01:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.169 09:01:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.169 09:01:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.169 09:01:13 -- paths/export.sh@5 -- # export PATH 00:26:56.169 09:01:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.169 09:01:13 -- nvmf/common.sh@47 -- # : 0 00:26:56.169 09:01:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:56.169 09:01:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:56.169 09:01:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.169 09:01:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.169 09:01:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.169 09:01:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:56.169 09:01:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:56.169 09:01:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:56.169 09:01:13 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:56.169 09:01:13 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:56.169 09:01:13 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:56.169 09:01:13 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:56.169 09:01:13 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:56.169 09:01:13 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:56.169 09:01:13 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:56.169 09:01:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:56.169 09:01:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.169 09:01:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:56.169 09:01:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:56.169 09:01:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:56.169 09:01:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.169 09:01:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:56.169 09:01:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.169 09:01:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:56.169 09:01:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:56.169 09:01:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:56.169 09:01:13 -- common/autotest_common.sh@10 -- # set +x 00:27:02.774 09:01:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:02.774 09:01:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:02.774 09:01:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:02.774 09:01:19 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:02.774 09:01:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:02.774 09:01:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:02.774 09:01:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:02.774 09:01:19 -- nvmf/common.sh@295 -- # net_devs=() 00:27:02.774 09:01:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:02.774 09:01:19 -- nvmf/common.sh@296 -- # e810=() 00:27:02.774 09:01:19 -- nvmf/common.sh@296 -- # local -ga e810 00:27:02.774 09:01:19 -- nvmf/common.sh@297 -- # x722=() 00:27:02.774 09:01:19 -- nvmf/common.sh@297 -- # local -ga x722 00:27:02.774 09:01:19 -- nvmf/common.sh@298 -- # mlx=() 00:27:02.774 09:01:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:02.774 09:01:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.774 09:01:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.774 09:01:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.774 09:01:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.774 09:01:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.774 09:01:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.774 09:01:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.774 09:01:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.774 09:01:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.774 09:01:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.774 09:01:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.774 09:01:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:02.774 09:01:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:02.774 09:01:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:02.774 09:01:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.774 09:01:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:02.774 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:02.774 09:01:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.774 09:01:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:02.774 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:02.774 09:01:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:02.774 09:01:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:02.774 09:01:19 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.774 09:01:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.774 09:01:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:02.774 09:01:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.774 09:01:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:02.774 Found net devices under 0000:af:00.0: cvl_0_0 00:27:02.774 09:01:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.774 09:01:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.774 09:01:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.774 09:01:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:02.774 09:01:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.774 09:01:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:02.774 Found net devices under 0000:af:00.1: cvl_0_1 00:27:02.774 09:01:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.774 09:01:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:02.774 09:01:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:02.774 09:01:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:02.774 09:01:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:02.774 09:01:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.774 09:01:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.774 09:01:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.774 09:01:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:02.774 09:01:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.774 09:01:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.774 09:01:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:02.774 09:01:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.774 09:01:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.774 09:01:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:02.774 09:01:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:02.774 09:01:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.774 09:01:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.774 09:01:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.774 09:01:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.774 09:01:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:02.774 09:01:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.774 09:01:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:03.033 09:01:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:03.033 09:01:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:03.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:03.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:27:03.033 00:27:03.033 --- 10.0.0.2 ping statistics --- 00:27:03.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.033 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:27:03.033 09:01:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:03.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:03.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:27:03.033 00:27:03.033 --- 10.0.0.1 ping statistics --- 00:27:03.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.033 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:27:03.033 09:01:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.033 09:01:20 -- nvmf/common.sh@411 -- # return 0 00:27:03.033 09:01:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:03.033 09:01:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.033 09:01:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:03.033 09:01:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:03.033 09:01:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.033 09:01:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:03.033 09:01:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:03.033 09:01:20 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:03.033 09:01:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:03.033 09:01:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:03.033 09:01:20 -- common/autotest_common.sh@10 -- # set +x 00:27:03.033 09:01:20 -- nvmf/common.sh@470 -- # nvmfpid=2193667 00:27:03.034 09:01:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:03.034 09:01:20 -- nvmf/common.sh@471 -- # waitforlisten 2193667 00:27:03.034 09:01:20 -- common/autotest_common.sh@817 -- # '[' -z 2193667 ']' 00:27:03.034 09:01:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.034 09:01:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:03.034 09:01:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.034 09:01:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:03.034 09:01:20 -- common/autotest_common.sh@10 -- # set +x 00:27:03.034 [2024-04-26 09:01:20.148781] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:27:03.034 [2024-04-26 09:01:20.148828] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.034 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.034 [2024-04-26 09:01:20.229387] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.293 [2024-04-26 09:01:20.319760] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.293 [2024-04-26 09:01:20.319793] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:03.293 [2024-04-26 09:01:20.319803] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.293 [2024-04-26 09:01:20.319811] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.293 [2024-04-26 09:01:20.319836] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:03.293 [2024-04-26 09:01:20.319854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.862 09:01:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:03.862 09:01:20 -- common/autotest_common.sh@850 -- # return 0 00:27:03.862 09:01:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:03.862 09:01:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:03.862 09:01:20 -- common/autotest_common.sh@10 -- # set +x 00:27:03.862 09:01:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:03.862 09:01:21 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:03.862 09:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:03.862 09:01:21 -- common/autotest_common.sh@10 -- # set +x 00:27:03.862 [2024-04-26 09:01:21.026038] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:03.862 [2024-04-26 09:01:21.034186] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:03.862 null0 00:27:03.862 [2024-04-26 09:01:21.066208] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.862 09:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:03.862 09:01:21 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2193942 00:27:03.862 09:01:21 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:03.862 09:01:21 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2193942 /tmp/host.sock 00:27:03.862 09:01:21 -- common/autotest_common.sh@817 -- # '[' -z 2193942 ']' 00:27:03.862 09:01:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:27:03.862 09:01:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:03.862 09:01:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:03.862 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:03.862 09:01:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:03.862 09:01:21 -- common/autotest_common.sh@10 -- # set +x 00:27:04.122 [2024-04-26 09:01:21.134950] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:27:04.122 [2024-04-26 09:01:21.135001] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193942 ] 00:27:04.122 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.122 [2024-04-26 09:01:21.202399] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.122 [2024-04-26 09:01:21.274637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.691 09:01:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:04.691 09:01:21 -- common/autotest_common.sh@850 -- # return 0 00:27:04.691 09:01:21 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:04.691 09:01:21 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:04.691 09:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:04.691 09:01:21 -- common/autotest_common.sh@10 -- # set +x 00:27:04.691 09:01:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:04.691 09:01:21 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:04.691 09:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:04.691 09:01:21 -- common/autotest_common.sh@10 -- # set +x 00:27:04.950 09:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:04.950 09:01:22 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:04.950 09:01:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:04.950 09:01:22 -- common/autotest_common.sh@10 -- # set +x 00:27:05.889 [2024-04-26 09:01:23.025110] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:05.889 [2024-04-26 09:01:23.025133] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:05.889 [2024-04-26 09:01:23.025149] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:06.149 [2024-04-26 09:01:23.155537] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:06.149 [2024-04-26 09:01:23.381600] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:06.149 [2024-04-26 09:01:23.381646] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:06.149 [2024-04-26 09:01:23.381667] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:06.149 [2024-04-26 09:01:23.381683] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:06.149 [2024-04-26 09:01:23.381704] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:06.149 09:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:06.149 09:01:23 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:06.149 09:01:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.149 [2024-04-26 09:01:23.385697] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1a4d8b0 was 
disconnected and freed. delete nvme_qpair. 00:27:06.149 09:01:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.149 09:01:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.149 09:01:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.149 09:01:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.149 09:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:06.149 09:01:23 -- common/autotest_common.sh@10 -- # set +x 00:27:06.408 09:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:06.408 09:01:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:06.408 09:01:23 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:06.408 09:01:23 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:06.408 09:01:23 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:06.408 09:01:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.408 09:01:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.408 09:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:06.408 09:01:23 -- common/autotest_common.sh@10 -- # set +x 00:27:06.408 09:01:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.408 09:01:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.408 09:01:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.408 09:01:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:06.408 09:01:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:06.408 09:01:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:07.785 09:01:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.785 09:01:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.785 09:01:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:07.785 09:01:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.785 09:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:07.785 09:01:24 -- common/autotest_common.sh@10 -- # set +x 00:27:07.785 09:01:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.785 09:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:07.785 09:01:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:07.785 09:01:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:08.723 09:01:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.723 09:01:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.723 09:01:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.723 09:01:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.723 09:01:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:08.723 09:01:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.723 09:01:25 -- common/autotest_common.sh@10 -- # set +x 00:27:08.723 09:01:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:08.723 09:01:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:08.723 09:01:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:09.661 09:01:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:09.661 09:01:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:09.661 09:01:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.661 09:01:26 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:09.661 09:01:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:09.661 09:01:26 -- common/autotest_common.sh@10 -- # set +x 00:27:09.661 09:01:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:09.661 09:01:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:09.661 09:01:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:09.661 09:01:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:10.598 09:01:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:10.598 09:01:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.598 09:01:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:10.598 09:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:10.598 09:01:27 -- common/autotest_common.sh@10 -- # set +x 00:27:10.598 09:01:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:10.598 09:01:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:10.598 09:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:10.598 09:01:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:10.598 09:01:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:11.577 [2024-04-26 09:01:28.822519] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:11.577 [2024-04-26 09:01:28.822564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.577 [2024-04-26 09:01:28.822581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.577 [2024-04-26 09:01:28.822593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.577 [2024-04-26 09:01:28.822602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.577 [2024-04-26 09:01:28.822612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.577 [2024-04-26 09:01:28.822622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.577 [2024-04-26 09:01:28.822632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.577 [2024-04-26 09:01:28.822641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.577 [2024-04-26 09:01:28.822651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:11.577 [2024-04-26 09:01:28.822661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.577 [2024-04-26 09:01:28.822670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a13b50 is same with the state(5) to be set 00:27:11.836 [2024-04-26 09:01:28.832538] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a13b50 (9): Bad file descriptor 00:27:11.836 09:01:28 -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:11.836 09:01:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:11.836 09:01:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:11.836 09:01:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:11.836 09:01:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:11.836 09:01:28 -- common/autotest_common.sh@10 -- # set +x 00:27:11.836 09:01:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:11.836 [2024-04-26 09:01:28.842576] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:12.776 [2024-04-26 09:01:29.872470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:13.714 [2024-04-26 09:01:30.896535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:13.714 [2024-04-26 09:01:30.896594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a13b50 with addr=10.0.0.2, port=4420 00:27:13.714 [2024-04-26 09:01:30.896616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a13b50 is same with the state(5) to be set 00:27:13.714 [2024-04-26 09:01:30.896738] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a13b50 (9): Bad file descriptor 00:27:13.714 [2024-04-26 09:01:30.896771] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:13.714 [2024-04-26 09:01:30.896800] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:13.714 [2024-04-26 09:01:30.896830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.714 [2024-04-26 09:01:30.896846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.714 [2024-04-26 09:01:30.896863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.714 [2024-04-26 09:01:30.896877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.714 [2024-04-26 09:01:30.896890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.714 [2024-04-26 09:01:30.896908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.714 [2024-04-26 09:01:30.896921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.714 [2024-04-26 09:01:30.896934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.714 [2024-04-26 09:01:30.896948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:13.714 [2024-04-26 09:01:30.896961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.714 [2024-04-26 09:01:30.896974] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
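[Sketch] The errno 110 / "Bad file descriptor" storm above is the expected outcome of the test pulling the target-side interface while the host's reconnect timers are armed. Condensed from the commands visible earlier in this log (RPC socket, namespace, interface name, and timeout values exactly as logged), the sequence that provokes it is:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # as in the log

# Attach through discovery with short loss/reconnect/fast-io-fail timeouts
# (flags and values copied verbatim from the rpc_cmd call logged above).
"$SPDK_DIR/scripts/rpc.py" -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

# Pull the target address and link out from under the live connection; the host
# then logs errno 110 (Connection timed out) and failed controller resets until
# the bdev is deleted, which is what the wait_for_bdev '' loop is checking for.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
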
00:27:13.714 [2024-04-26 09:01:30.897648] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a13f60 (9): Bad file descriptor 00:27:13.714 [2024-04-26 09:01:30.898663] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:13.714 [2024-04-26 09:01:30.898680] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:13.714 09:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:13.714 09:01:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:13.714 09:01:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:15.093 09:01:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:15.093 09:01:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.093 09:01:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:15.093 09:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.093 09:01:31 -- common/autotest_common.sh@10 -- # set +x 00:27:15.093 09:01:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:15.093 09:01:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:15.093 09:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.093 09:01:31 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:15.093 09:01:31 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:15.093 09:01:31 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:15.093 09:01:32 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:15.093 09:01:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:15.093 09:01:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.093 09:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:15.093 09:01:32 -- common/autotest_common.sh@10 -- # set +x 00:27:15.093 09:01:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:15.093 09:01:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:15.093 09:01:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:15.093 09:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:15.093 09:01:32 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:15.093 09:01:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:16.032 [2024-04-26 09:01:32.952689] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:16.032 [2024-04-26 09:01:32.952708] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:16.032 [2024-04-26 09:01:32.952724] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:16.032 [2024-04-26 09:01:33.039990] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:16.032 [2024-04-26 09:01:33.102776] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:16.032 [2024-04-26 09:01:33.102811] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:16.032 [2024-04-26 09:01:33.102831] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:16.032 [2024-04-26 09:01:33.102845] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach 
nvme1 done 00:27:16.032 [2024-04-26 09:01:33.102857] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:16.032 [2024-04-26 09:01:33.109944] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1a57d40 was disconnected and freed. delete nvme_qpair. 00:27:16.032 09:01:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:16.033 09:01:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:16.033 09:01:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.033 09:01:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:16.033 09:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:16.033 09:01:33 -- common/autotest_common.sh@10 -- # set +x 00:27:16.033 09:01:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:16.033 09:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:16.033 09:01:33 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:16.033 09:01:33 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:16.033 09:01:33 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2193942 00:27:16.033 09:01:33 -- common/autotest_common.sh@936 -- # '[' -z 2193942 ']' 00:27:16.033 09:01:33 -- common/autotest_common.sh@940 -- # kill -0 2193942 00:27:16.033 09:01:33 -- common/autotest_common.sh@941 -- # uname 00:27:16.033 09:01:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:16.033 09:01:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2193942 00:27:16.033 09:01:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:16.033 09:01:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:16.033 09:01:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2193942' 00:27:16.033 killing process with pid 2193942 00:27:16.033 09:01:33 -- common/autotest_common.sh@955 -- # kill 2193942 00:27:16.033 09:01:33 -- common/autotest_common.sh@960 -- # wait 2193942 00:27:16.291 09:01:33 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:16.291 09:01:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:16.291 09:01:33 -- nvmf/common.sh@117 -- # sync 00:27:16.291 09:01:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.291 09:01:33 -- nvmf/common.sh@120 -- # set +e 00:27:16.291 09:01:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.291 09:01:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:16.291 rmmod nvme_tcp 00:27:16.291 rmmod nvme_fabrics 00:27:16.291 rmmod nvme_keyring 00:27:16.291 09:01:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.291 09:01:33 -- nvmf/common.sh@124 -- # set -e 00:27:16.291 09:01:33 -- nvmf/common.sh@125 -- # return 0 00:27:16.291 09:01:33 -- nvmf/common.sh@478 -- # '[' -n 2193667 ']' 00:27:16.291 09:01:33 -- nvmf/common.sh@479 -- # killprocess 2193667 00:27:16.291 09:01:33 -- common/autotest_common.sh@936 -- # '[' -z 2193667 ']' 00:27:16.291 09:01:33 -- common/autotest_common.sh@940 -- # kill -0 2193667 00:27:16.291 09:01:33 -- common/autotest_common.sh@941 -- # uname 00:27:16.291 09:01:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:16.291 09:01:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2193667 00:27:16.550 09:01:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:16.550 09:01:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:16.550 09:01:33 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 2193667' 00:27:16.550 killing process with pid 2193667 00:27:16.550 09:01:33 -- common/autotest_common.sh@955 -- # kill 2193667 00:27:16.550 09:01:33 -- common/autotest_common.sh@960 -- # wait 2193667 00:27:16.550 09:01:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:16.551 09:01:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:16.551 09:01:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:16.551 09:01:33 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:16.551 09:01:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:16.551 09:01:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.551 09:01:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.551 09:01:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.089 09:01:35 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:19.089 00:27:19.089 real 0m22.681s 00:27:19.089 user 0m25.660s 00:27:19.089 sys 0m7.139s 00:27:19.089 09:01:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:19.089 09:01:35 -- common/autotest_common.sh@10 -- # set +x 00:27:19.089 ************************************ 00:27:19.089 END TEST nvmf_discovery_remove_ifc 00:27:19.089 ************************************ 00:27:19.089 09:01:35 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:19.089 09:01:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:19.089 09:01:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:19.089 09:01:35 -- common/autotest_common.sh@10 -- # set +x 00:27:19.089 ************************************ 00:27:19.089 START TEST nvmf_identify_kernel_target 00:27:19.089 ************************************ 00:27:19.089 09:01:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:19.089 * Looking for test storage... 
00:27:19.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:19.089 09:01:36 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.089 09:01:36 -- nvmf/common.sh@7 -- # uname -s 00:27:19.089 09:01:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.089 09:01:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.089 09:01:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.089 09:01:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.089 09:01:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.089 09:01:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.089 09:01:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.089 09:01:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.089 09:01:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.089 09:01:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.089 09:01:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:19.089 09:01:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:19.089 09:01:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.089 09:01:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.089 09:01:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.089 09:01:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.089 09:01:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.089 09:01:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.089 09:01:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.089 09:01:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.089 09:01:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.089 09:01:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.089 09:01:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.089 09:01:36 -- paths/export.sh@5 -- # export PATH 00:27:19.089 09:01:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.089 09:01:36 -- nvmf/common.sh@47 -- # : 0 00:27:19.089 09:01:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:19.089 09:01:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:19.089 09:01:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.089 09:01:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.089 09:01:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.089 09:01:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:19.089 09:01:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:19.089 09:01:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:19.089 09:01:36 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:19.089 09:01:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:19.089 09:01:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.089 09:01:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:19.089 09:01:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:19.089 09:01:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:19.089 09:01:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.089 09:01:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.089 09:01:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.090 09:01:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:19.090 09:01:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:19.090 09:01:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:19.090 09:01:36 -- common/autotest_common.sh@10 -- # set +x 00:27:25.694 09:01:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:25.694 09:01:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:25.694 09:01:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:25.694 09:01:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:25.694 09:01:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:25.694 09:01:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:25.694 09:01:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:25.694 09:01:42 -- nvmf/common.sh@295 -- # net_devs=() 00:27:25.694 09:01:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:25.694 09:01:42 -- nvmf/common.sh@296 -- # e810=() 00:27:25.694 09:01:42 -- nvmf/common.sh@296 -- # local -ga e810 00:27:25.694 09:01:42 -- nvmf/common.sh@297 -- # 
x722=() 00:27:25.694 09:01:42 -- nvmf/common.sh@297 -- # local -ga x722 00:27:25.694 09:01:42 -- nvmf/common.sh@298 -- # mlx=() 00:27:25.694 09:01:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:25.694 09:01:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.694 09:01:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:25.694 09:01:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.694 09:01:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.694 09:01:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.694 09:01:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.694 09:01:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.694 09:01:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.694 09:01:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.694 09:01:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.694 09:01:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.694 09:01:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:25.694 09:01:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:25.694 09:01:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:25.694 09:01:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:25.694 09:01:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:25.694 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:25.694 09:01:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:25.694 09:01:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:25.694 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:25.694 09:01:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:25.694 09:01:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:25.694 09:01:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.694 09:01:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:25.694 09:01:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.694 09:01:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:25.694 Found net devices under 0000:af:00.0: cvl_0_0 00:27:25.694 09:01:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:27:25.694 09:01:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:25.694 09:01:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.694 09:01:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:25.694 09:01:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.694 09:01:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:25.694 Found net devices under 0000:af:00.1: cvl_0_1 00:27:25.694 09:01:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.694 09:01:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:25.694 09:01:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:25.694 09:01:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:25.694 09:01:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:25.694 09:01:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.694 09:01:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.694 09:01:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.694 09:01:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:25.694 09:01:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.694 09:01:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.694 09:01:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:25.694 09:01:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.694 09:01:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.694 09:01:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:25.694 09:01:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:25.694 09:01:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.695 09:01:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.695 09:01:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.695 09:01:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.695 09:01:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:25.695 09:01:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.695 09:01:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.695 09:01:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.695 09:01:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:25.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:27:25.695 00:27:25.695 --- 10.0.0.2 ping statistics --- 00:27:25.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.695 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:27:25.695 09:01:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:25.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:27:25.695 00:27:25.695 --- 10.0.0.1 ping statistics --- 00:27:25.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.695 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:27:25.695 09:01:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.695 09:01:42 -- nvmf/common.sh@411 -- # return 0 00:27:25.695 09:01:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:25.695 09:01:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.695 09:01:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:25.695 09:01:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:25.695 09:01:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.695 09:01:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:25.695 09:01:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:25.695 09:01:42 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:25.695 09:01:42 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:25.695 09:01:42 -- nvmf/common.sh@717 -- # local ip 00:27:25.695 09:01:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:25.695 09:01:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:25.695 09:01:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.695 09:01:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.695 09:01:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:25.695 09:01:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.695 09:01:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:25.695 09:01:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:25.695 09:01:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:25.695 09:01:42 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:25.695 09:01:42 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:25.695 09:01:42 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:25.695 09:01:42 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:27:25.695 09:01:42 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.695 09:01:42 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:25.695 09:01:42 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:25.695 09:01:42 -- nvmf/common.sh@628 -- # local block nvme 00:27:25.695 09:01:42 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:25.695 09:01:42 -- nvmf/common.sh@631 -- # modprobe nvmet 00:27:25.695 09:01:42 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:25.695 09:01:42 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:28.982 Waiting for block devices as requested 00:27:28.982 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:28.983 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:28.983 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:28.983 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:29.241 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:29.241 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:29.241 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:29.241 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:29.500 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:29.500 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:29.500 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:29.761 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:29.761 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:29.761 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:30.019 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:30.019 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:30.019 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:27:30.278 09:01:47 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:30.278 09:01:47 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:30.278 09:01:47 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:27:30.278 09:01:47 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:30.278 09:01:47 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:30.278 09:01:47 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:30.278 09:01:47 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:27:30.278 09:01:47 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:30.278 09:01:47 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:30.278 No valid GPT data, bailing 00:27:30.278 09:01:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:30.278 09:01:47 -- scripts/common.sh@391 -- # pt= 00:27:30.278 09:01:47 -- scripts/common.sh@392 -- # return 1 00:27:30.278 09:01:47 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:27:30.278 09:01:47 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:27:30.278 09:01:47 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:30.278 09:01:47 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:30.278 09:01:47 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:30.278 09:01:47 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:30.278 09:01:47 -- nvmf/common.sh@656 -- # echo 1 00:27:30.278 09:01:47 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:27:30.278 09:01:47 -- nvmf/common.sh@658 -- # echo 1 00:27:30.278 09:01:47 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:27:30.278 09:01:47 -- nvmf/common.sh@661 -- # echo tcp 00:27:30.278 09:01:47 -- nvmf/common.sh@662 -- # echo 4420 00:27:30.278 09:01:47 -- nvmf/common.sh@663 -- # echo ipv4 00:27:30.278 09:01:47 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:30.278 09:01:47 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:27:30.539 00:27:30.539 Discovery Log Number of Records 2, Generation counter 2 00:27:30.539 =====Discovery Log Entry 0====== 00:27:30.539 trtype: tcp 00:27:30.539 adrfam: ipv4 00:27:30.539 subtype: current discovery subsystem 00:27:30.539 treq: not specified, sq flow control disable supported 00:27:30.539 portid: 1 00:27:30.539 trsvcid: 4420 00:27:30.539 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:30.539 traddr: 10.0.0.1 00:27:30.539 eflags: none 00:27:30.539 sectype: none 00:27:30.539 =====Discovery Log Entry 1====== 00:27:30.539 trtype: tcp 00:27:30.539 adrfam: ipv4 00:27:30.539 subtype: nvme subsystem 00:27:30.539 treq: not specified, sq flow control disable supported 00:27:30.539 portid: 1 00:27:30.539 trsvcid: 4420 00:27:30.539 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:30.539 traddr: 10.0.0.1 00:27:30.539 eflags: none 00:27:30.539 sectype: none 00:27:30.539 09:01:47 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:30.539 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:30.539 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.539 ===================================================== 00:27:30.539 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:30.539 ===================================================== 00:27:30.539 Controller Capabilities/Features 00:27:30.539 ================================ 00:27:30.539 Vendor ID: 0000 00:27:30.539 Subsystem Vendor ID: 0000 00:27:30.539 Serial Number: c87ea81854aeda33c283 00:27:30.539 Model Number: Linux 00:27:30.539 Firmware Version: 6.7.0-68 00:27:30.539 Recommended Arb Burst: 0 00:27:30.539 IEEE OUI Identifier: 00 00 00 00:27:30.539 Multi-path I/O 00:27:30.539 May have multiple subsystem ports: No 00:27:30.539 May have multiple controllers: No 00:27:30.539 Associated with SR-IOV VF: No 00:27:30.539 Max Data Transfer Size: Unlimited 00:27:30.539 Max Number of Namespaces: 0 00:27:30.539 Max Number of I/O Queues: 1024 00:27:30.539 NVMe Specification Version (VS): 1.3 00:27:30.539 NVMe Specification Version (Identify): 1.3 00:27:30.539 Maximum Queue Entries: 1024 00:27:30.540 Contiguous Queues Required: No 00:27:30.540 Arbitration Mechanisms Supported 00:27:30.540 Weighted Round Robin: Not Supported 00:27:30.540 Vendor Specific: Not Supported 00:27:30.540 Reset Timeout: 7500 ms 00:27:30.540 Doorbell Stride: 4 bytes 00:27:30.540 NVM Subsystem Reset: Not Supported 00:27:30.540 Command Sets Supported 00:27:30.540 NVM Command Set: Supported 00:27:30.540 Boot Partition: Not Supported 00:27:30.540 Memory Page Size Minimum: 4096 bytes 00:27:30.540 Memory Page Size Maximum: 4096 bytes 00:27:30.540 Persistent Memory Region: Not Supported 00:27:30.540 Optional Asynchronous Events Supported 00:27:30.540 Namespace Attribute Notices: Not Supported 00:27:30.540 Firmware Activation Notices: Not Supported 00:27:30.540 ANA Change Notices: Not Supported 00:27:30.540 PLE Aggregate Log Change Notices: Not Supported 00:27:30.540 LBA Status Info Alert Notices: Not Supported 00:27:30.540 EGE Aggregate Log Change Notices: Not Supported 00:27:30.540 Normal NVM Subsystem Shutdown event: Not Supported 00:27:30.540 Zone Descriptor Change Notices: Not Supported 00:27:30.540 Discovery Log Change Notices: Supported 
00:27:30.540 Controller Attributes 00:27:30.540 128-bit Host Identifier: Not Supported 00:27:30.540 Non-Operational Permissive Mode: Not Supported 00:27:30.540 NVM Sets: Not Supported 00:27:30.540 Read Recovery Levels: Not Supported 00:27:30.540 Endurance Groups: Not Supported 00:27:30.540 Predictable Latency Mode: Not Supported 00:27:30.540 Traffic Based Keep ALive: Not Supported 00:27:30.540 Namespace Granularity: Not Supported 00:27:30.540 SQ Associations: Not Supported 00:27:30.540 UUID List: Not Supported 00:27:30.540 Multi-Domain Subsystem: Not Supported 00:27:30.540 Fixed Capacity Management: Not Supported 00:27:30.540 Variable Capacity Management: Not Supported 00:27:30.540 Delete Endurance Group: Not Supported 00:27:30.540 Delete NVM Set: Not Supported 00:27:30.540 Extended LBA Formats Supported: Not Supported 00:27:30.540 Flexible Data Placement Supported: Not Supported 00:27:30.540 00:27:30.540 Controller Memory Buffer Support 00:27:30.540 ================================ 00:27:30.540 Supported: No 00:27:30.540 00:27:30.540 Persistent Memory Region Support 00:27:30.540 ================================ 00:27:30.540 Supported: No 00:27:30.540 00:27:30.540 Admin Command Set Attributes 00:27:30.540 ============================ 00:27:30.540 Security Send/Receive: Not Supported 00:27:30.540 Format NVM: Not Supported 00:27:30.540 Firmware Activate/Download: Not Supported 00:27:30.540 Namespace Management: Not Supported 00:27:30.540 Device Self-Test: Not Supported 00:27:30.540 Directives: Not Supported 00:27:30.540 NVMe-MI: Not Supported 00:27:30.540 Virtualization Management: Not Supported 00:27:30.540 Doorbell Buffer Config: Not Supported 00:27:30.540 Get LBA Status Capability: Not Supported 00:27:30.540 Command & Feature Lockdown Capability: Not Supported 00:27:30.540 Abort Command Limit: 1 00:27:30.540 Async Event Request Limit: 1 00:27:30.540 Number of Firmware Slots: N/A 00:27:30.540 Firmware Slot 1 Read-Only: N/A 00:27:30.540 Firmware Activation Without Reset: N/A 00:27:30.540 Multiple Update Detection Support: N/A 00:27:30.540 Firmware Update Granularity: No Information Provided 00:27:30.540 Per-Namespace SMART Log: No 00:27:30.540 Asymmetric Namespace Access Log Page: Not Supported 00:27:30.540 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:30.540 Command Effects Log Page: Not Supported 00:27:30.540 Get Log Page Extended Data: Supported 00:27:30.540 Telemetry Log Pages: Not Supported 00:27:30.540 Persistent Event Log Pages: Not Supported 00:27:30.540 Supported Log Pages Log Page: May Support 00:27:30.540 Commands Supported & Effects Log Page: Not Supported 00:27:30.540 Feature Identifiers & Effects Log Page:May Support 00:27:30.540 NVMe-MI Commands & Effects Log Page: May Support 00:27:30.540 Data Area 4 for Telemetry Log: Not Supported 00:27:30.540 Error Log Page Entries Supported: 1 00:27:30.540 Keep Alive: Not Supported 00:27:30.540 00:27:30.540 NVM Command Set Attributes 00:27:30.540 ========================== 00:27:30.540 Submission Queue Entry Size 00:27:30.540 Max: 1 00:27:30.540 Min: 1 00:27:30.540 Completion Queue Entry Size 00:27:30.540 Max: 1 00:27:30.540 Min: 1 00:27:30.540 Number of Namespaces: 0 00:27:30.540 Compare Command: Not Supported 00:27:30.540 Write Uncorrectable Command: Not Supported 00:27:30.540 Dataset Management Command: Not Supported 00:27:30.540 Write Zeroes Command: Not Supported 00:27:30.540 Set Features Save Field: Not Supported 00:27:30.540 Reservations: Not Supported 00:27:30.540 Timestamp: Not Supported 00:27:30.540 Copy: Not 
Supported 00:27:30.540 Volatile Write Cache: Not Present 00:27:30.540 Atomic Write Unit (Normal): 1 00:27:30.540 Atomic Write Unit (PFail): 1 00:27:30.540 Atomic Compare & Write Unit: 1 00:27:30.540 Fused Compare & Write: Not Supported 00:27:30.540 Scatter-Gather List 00:27:30.540 SGL Command Set: Supported 00:27:30.540 SGL Keyed: Not Supported 00:27:30.540 SGL Bit Bucket Descriptor: Not Supported 00:27:30.540 SGL Metadata Pointer: Not Supported 00:27:30.540 Oversized SGL: Not Supported 00:27:30.540 SGL Metadata Address: Not Supported 00:27:30.540 SGL Offset: Supported 00:27:30.540 Transport SGL Data Block: Not Supported 00:27:30.540 Replay Protected Memory Block: Not Supported 00:27:30.540 00:27:30.540 Firmware Slot Information 00:27:30.540 ========================= 00:27:30.540 Active slot: 0 00:27:30.540 00:27:30.540 00:27:30.540 Error Log 00:27:30.540 ========= 00:27:30.540 00:27:30.540 Active Namespaces 00:27:30.540 ================= 00:27:30.540 Discovery Log Page 00:27:30.540 ================== 00:27:30.540 Generation Counter: 2 00:27:30.540 Number of Records: 2 00:27:30.540 Record Format: 0 00:27:30.540 00:27:30.540 Discovery Log Entry 0 00:27:30.540 ---------------------- 00:27:30.540 Transport Type: 3 (TCP) 00:27:30.540 Address Family: 1 (IPv4) 00:27:30.540 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:30.540 Entry Flags: 00:27:30.540 Duplicate Returned Information: 0 00:27:30.540 Explicit Persistent Connection Support for Discovery: 0 00:27:30.540 Transport Requirements: 00:27:30.540 Secure Channel: Not Specified 00:27:30.540 Port ID: 1 (0x0001) 00:27:30.540 Controller ID: 65535 (0xffff) 00:27:30.540 Admin Max SQ Size: 32 00:27:30.540 Transport Service Identifier: 4420 00:27:30.540 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:30.540 Transport Address: 10.0.0.1 00:27:30.540 Discovery Log Entry 1 00:27:30.540 ---------------------- 00:27:30.540 Transport Type: 3 (TCP) 00:27:30.540 Address Family: 1 (IPv4) 00:27:30.540 Subsystem Type: 2 (NVM Subsystem) 00:27:30.540 Entry Flags: 00:27:30.540 Duplicate Returned Information: 0 00:27:30.540 Explicit Persistent Connection Support for Discovery: 0 00:27:30.540 Transport Requirements: 00:27:30.540 Secure Channel: Not Specified 00:27:30.540 Port ID: 1 (0x0001) 00:27:30.540 Controller ID: 65535 (0xffff) 00:27:30.540 Admin Max SQ Size: 32 00:27:30.540 Transport Service Identifier: 4420 00:27:30.540 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:30.540 Transport Address: 10.0.0.1 00:27:30.540 09:01:47 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:30.540 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.540 get_feature(0x01) failed 00:27:30.540 get_feature(0x02) failed 00:27:30.540 get_feature(0x04) failed 00:27:30.540 ===================================================== 00:27:30.540 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:30.540 ===================================================== 00:27:30.540 Controller Capabilities/Features 00:27:30.540 ================================ 00:27:30.540 Vendor ID: 0000 00:27:30.540 Subsystem Vendor ID: 0000 00:27:30.540 Serial Number: 0ed5831b0526e7584e56 00:27:30.540 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:30.540 Firmware Version: 6.7.0-68 00:27:30.540 Recommended Arb Burst: 6 00:27:30.540 IEEE OUI Identifier: 00 00 00 
00:27:30.540 Multi-path I/O 00:27:30.540 May have multiple subsystem ports: Yes 00:27:30.540 May have multiple controllers: Yes 00:27:30.540 Associated with SR-IOV VF: No 00:27:30.540 Max Data Transfer Size: Unlimited 00:27:30.540 Max Number of Namespaces: 1024 00:27:30.540 Max Number of I/O Queues: 128 00:27:30.540 NVMe Specification Version (VS): 1.3 00:27:30.541 NVMe Specification Version (Identify): 1.3 00:27:30.541 Maximum Queue Entries: 1024 00:27:30.541 Contiguous Queues Required: No 00:27:30.541 Arbitration Mechanisms Supported 00:27:30.541 Weighted Round Robin: Not Supported 00:27:30.541 Vendor Specific: Not Supported 00:27:30.541 Reset Timeout: 7500 ms 00:27:30.541 Doorbell Stride: 4 bytes 00:27:30.541 NVM Subsystem Reset: Not Supported 00:27:30.541 Command Sets Supported 00:27:30.541 NVM Command Set: Supported 00:27:30.541 Boot Partition: Not Supported 00:27:30.541 Memory Page Size Minimum: 4096 bytes 00:27:30.541 Memory Page Size Maximum: 4096 bytes 00:27:30.541 Persistent Memory Region: Not Supported 00:27:30.541 Optional Asynchronous Events Supported 00:27:30.541 Namespace Attribute Notices: Supported 00:27:30.541 Firmware Activation Notices: Not Supported 00:27:30.541 ANA Change Notices: Supported 00:27:30.541 PLE Aggregate Log Change Notices: Not Supported 00:27:30.541 LBA Status Info Alert Notices: Not Supported 00:27:30.541 EGE Aggregate Log Change Notices: Not Supported 00:27:30.541 Normal NVM Subsystem Shutdown event: Not Supported 00:27:30.541 Zone Descriptor Change Notices: Not Supported 00:27:30.541 Discovery Log Change Notices: Not Supported 00:27:30.541 Controller Attributes 00:27:30.541 128-bit Host Identifier: Supported 00:27:30.541 Non-Operational Permissive Mode: Not Supported 00:27:30.541 NVM Sets: Not Supported 00:27:30.541 Read Recovery Levels: Not Supported 00:27:30.541 Endurance Groups: Not Supported 00:27:30.541 Predictable Latency Mode: Not Supported 00:27:30.541 Traffic Based Keep ALive: Supported 00:27:30.541 Namespace Granularity: Not Supported 00:27:30.541 SQ Associations: Not Supported 00:27:30.541 UUID List: Not Supported 00:27:30.541 Multi-Domain Subsystem: Not Supported 00:27:30.541 Fixed Capacity Management: Not Supported 00:27:30.541 Variable Capacity Management: Not Supported 00:27:30.541 Delete Endurance Group: Not Supported 00:27:30.541 Delete NVM Set: Not Supported 00:27:30.541 Extended LBA Formats Supported: Not Supported 00:27:30.541 Flexible Data Placement Supported: Not Supported 00:27:30.541 00:27:30.541 Controller Memory Buffer Support 00:27:30.541 ================================ 00:27:30.541 Supported: No 00:27:30.541 00:27:30.541 Persistent Memory Region Support 00:27:30.541 ================================ 00:27:30.541 Supported: No 00:27:30.541 00:27:30.541 Admin Command Set Attributes 00:27:30.541 ============================ 00:27:30.541 Security Send/Receive: Not Supported 00:27:30.541 Format NVM: Not Supported 00:27:30.541 Firmware Activate/Download: Not Supported 00:27:30.541 Namespace Management: Not Supported 00:27:30.541 Device Self-Test: Not Supported 00:27:30.541 Directives: Not Supported 00:27:30.541 NVMe-MI: Not Supported 00:27:30.541 Virtualization Management: Not Supported 00:27:30.541 Doorbell Buffer Config: Not Supported 00:27:30.541 Get LBA Status Capability: Not Supported 00:27:30.541 Command & Feature Lockdown Capability: Not Supported 00:27:30.541 Abort Command Limit: 4 00:27:30.541 Async Event Request Limit: 4 00:27:30.541 Number of Firmware Slots: N/A 00:27:30.541 Firmware Slot 1 Read-Only: N/A 00:27:30.541 
Firmware Activation Without Reset: N/A 00:27:30.541 Multiple Update Detection Support: N/A 00:27:30.541 Firmware Update Granularity: No Information Provided 00:27:30.541 Per-Namespace SMART Log: Yes 00:27:30.541 Asymmetric Namespace Access Log Page: Supported 00:27:30.541 ANA Transition Time : 10 sec 00:27:30.541 00:27:30.541 Asymmetric Namespace Access Capabilities 00:27:30.541 ANA Optimized State : Supported 00:27:30.541 ANA Non-Optimized State : Supported 00:27:30.541 ANA Inaccessible State : Supported 00:27:30.541 ANA Persistent Loss State : Supported 00:27:30.541 ANA Change State : Supported 00:27:30.541 ANAGRPID is not changed : No 00:27:30.541 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:30.541 00:27:30.541 ANA Group Identifier Maximum : 128 00:27:30.541 Number of ANA Group Identifiers : 128 00:27:30.541 Max Number of Allowed Namespaces : 1024 00:27:30.541 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:30.541 Command Effects Log Page: Supported 00:27:30.541 Get Log Page Extended Data: Supported 00:27:30.541 Telemetry Log Pages: Not Supported 00:27:30.541 Persistent Event Log Pages: Not Supported 00:27:30.541 Supported Log Pages Log Page: May Support 00:27:30.541 Commands Supported & Effects Log Page: Not Supported 00:27:30.541 Feature Identifiers & Effects Log Page:May Support 00:27:30.541 NVMe-MI Commands & Effects Log Page: May Support 00:27:30.541 Data Area 4 for Telemetry Log: Not Supported 00:27:30.541 Error Log Page Entries Supported: 128 00:27:30.541 Keep Alive: Supported 00:27:30.541 Keep Alive Granularity: 1000 ms 00:27:30.541 00:27:30.541 NVM Command Set Attributes 00:27:30.541 ========================== 00:27:30.541 Submission Queue Entry Size 00:27:30.541 Max: 64 00:27:30.541 Min: 64 00:27:30.541 Completion Queue Entry Size 00:27:30.541 Max: 16 00:27:30.541 Min: 16 00:27:30.541 Number of Namespaces: 1024 00:27:30.541 Compare Command: Not Supported 00:27:30.541 Write Uncorrectable Command: Not Supported 00:27:30.541 Dataset Management Command: Supported 00:27:30.541 Write Zeroes Command: Supported 00:27:30.541 Set Features Save Field: Not Supported 00:27:30.541 Reservations: Not Supported 00:27:30.541 Timestamp: Not Supported 00:27:30.541 Copy: Not Supported 00:27:30.541 Volatile Write Cache: Present 00:27:30.541 Atomic Write Unit (Normal): 1 00:27:30.541 Atomic Write Unit (PFail): 1 00:27:30.541 Atomic Compare & Write Unit: 1 00:27:30.541 Fused Compare & Write: Not Supported 00:27:30.541 Scatter-Gather List 00:27:30.541 SGL Command Set: Supported 00:27:30.541 SGL Keyed: Not Supported 00:27:30.541 SGL Bit Bucket Descriptor: Not Supported 00:27:30.541 SGL Metadata Pointer: Not Supported 00:27:30.541 Oversized SGL: Not Supported 00:27:30.541 SGL Metadata Address: Not Supported 00:27:30.541 SGL Offset: Supported 00:27:30.541 Transport SGL Data Block: Not Supported 00:27:30.541 Replay Protected Memory Block: Not Supported 00:27:30.541 00:27:30.541 Firmware Slot Information 00:27:30.541 ========================= 00:27:30.541 Active slot: 0 00:27:30.541 00:27:30.541 Asymmetric Namespace Access 00:27:30.541 =========================== 00:27:30.541 Change Count : 0 00:27:30.541 Number of ANA Group Descriptors : 1 00:27:30.541 ANA Group Descriptor : 0 00:27:30.541 ANA Group ID : 1 00:27:30.541 Number of NSID Values : 1 00:27:30.541 Change Count : 0 00:27:30.541 ANA State : 1 00:27:30.541 Namespace Identifier : 1 00:27:30.541 00:27:30.541 Commands Supported and Effects 00:27:30.541 ============================== 00:27:30.541 Admin Commands 00:27:30.541 -------------- 
00:27:30.541 Get Log Page (02h): Supported 00:27:30.541 Identify (06h): Supported 00:27:30.541 Abort (08h): Supported 00:27:30.541 Set Features (09h): Supported 00:27:30.541 Get Features (0Ah): Supported 00:27:30.541 Asynchronous Event Request (0Ch): Supported 00:27:30.541 Keep Alive (18h): Supported 00:27:30.541 I/O Commands 00:27:30.541 ------------ 00:27:30.541 Flush (00h): Supported 00:27:30.541 Write (01h): Supported LBA-Change 00:27:30.541 Read (02h): Supported 00:27:30.541 Write Zeroes (08h): Supported LBA-Change 00:27:30.541 Dataset Management (09h): Supported 00:27:30.541 00:27:30.541 Error Log 00:27:30.541 ========= 00:27:30.541 Entry: 0 00:27:30.541 Error Count: 0x3 00:27:30.541 Submission Queue Id: 0x0 00:27:30.541 Command Id: 0x5 00:27:30.541 Phase Bit: 0 00:27:30.541 Status Code: 0x2 00:27:30.541 Status Code Type: 0x0 00:27:30.541 Do Not Retry: 1 00:27:30.541 Error Location: 0x28 00:27:30.541 LBA: 0x0 00:27:30.541 Namespace: 0x0 00:27:30.541 Vendor Log Page: 0x0 00:27:30.541 ----------- 00:27:30.541 Entry: 1 00:27:30.541 Error Count: 0x2 00:27:30.541 Submission Queue Id: 0x0 00:27:30.541 Command Id: 0x5 00:27:30.541 Phase Bit: 0 00:27:30.541 Status Code: 0x2 00:27:30.541 Status Code Type: 0x0 00:27:30.541 Do Not Retry: 1 00:27:30.541 Error Location: 0x28 00:27:30.541 LBA: 0x0 00:27:30.541 Namespace: 0x0 00:27:30.541 Vendor Log Page: 0x0 00:27:30.541 ----------- 00:27:30.541 Entry: 2 00:27:30.541 Error Count: 0x1 00:27:30.541 Submission Queue Id: 0x0 00:27:30.541 Command Id: 0x4 00:27:30.541 Phase Bit: 0 00:27:30.541 Status Code: 0x2 00:27:30.541 Status Code Type: 0x0 00:27:30.541 Do Not Retry: 1 00:27:30.541 Error Location: 0x28 00:27:30.541 LBA: 0x0 00:27:30.541 Namespace: 0x0 00:27:30.541 Vendor Log Page: 0x0 00:27:30.542 00:27:30.542 Number of Queues 00:27:30.542 ================ 00:27:30.542 Number of I/O Submission Queues: 128 00:27:30.542 Number of I/O Completion Queues: 128 00:27:30.542 00:27:30.542 ZNS Specific Controller Data 00:27:30.542 ============================ 00:27:30.542 Zone Append Size Limit: 0 00:27:30.542 00:27:30.542 00:27:30.542 Active Namespaces 00:27:30.542 ================= 00:27:30.542 get_feature(0x05) failed 00:27:30.542 Namespace ID:1 00:27:30.542 Command Set Identifier: NVM (00h) 00:27:30.542 Deallocate: Supported 00:27:30.542 Deallocated/Unwritten Error: Not Supported 00:27:30.542 Deallocated Read Value: Unknown 00:27:30.542 Deallocate in Write Zeroes: Not Supported 00:27:30.542 Deallocated Guard Field: 0xFFFF 00:27:30.542 Flush: Supported 00:27:30.542 Reservation: Not Supported 00:27:30.542 Namespace Sharing Capabilities: Multiple Controllers 00:27:30.542 Size (in LBAs): 3125627568 (1490GiB) 00:27:30.542 Capacity (in LBAs): 3125627568 (1490GiB) 00:27:30.542 Utilization (in LBAs): 3125627568 (1490GiB) 00:27:30.542 UUID: a6cd8847-e78f-40b8-9e52-03bfb1da15c8 00:27:30.542 Thin Provisioning: Not Supported 00:27:30.542 Per-NS Atomic Units: Yes 00:27:30.542 Atomic Boundary Size (Normal): 0 00:27:30.542 Atomic Boundary Size (PFail): 0 00:27:30.542 Atomic Boundary Offset: 0 00:27:30.542 NGUID/EUI64 Never Reused: No 00:27:30.542 ANA group ID: 1 00:27:30.542 Namespace Write Protected: No 00:27:30.542 Number of LBA Formats: 1 00:27:30.542 Current LBA Format: LBA Format #00 00:27:30.542 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:30.542 00:27:30.542 09:01:47 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:30.542 09:01:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:30.542 09:01:47 -- nvmf/common.sh@117 -- # sync 00:27:30.542 09:01:47 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:30.542 09:01:47 -- nvmf/common.sh@120 -- # set +e 00:27:30.542 09:01:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:30.542 09:01:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:30.542 rmmod nvme_tcp 00:27:30.542 rmmod nvme_fabrics 00:27:30.542 09:01:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:30.542 09:01:47 -- nvmf/common.sh@124 -- # set -e 00:27:30.542 09:01:47 -- nvmf/common.sh@125 -- # return 0 00:27:30.542 09:01:47 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:27:30.542 09:01:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:30.542 09:01:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:30.542 09:01:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:30.542 09:01:47 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:30.542 09:01:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:30.542 09:01:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.542 09:01:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:30.542 09:01:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.076 09:01:49 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:33.076 09:01:49 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:33.076 09:01:49 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:33.076 09:01:49 -- nvmf/common.sh@675 -- # echo 0 00:27:33.076 09:01:49 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:33.076 09:01:49 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:33.076 09:01:49 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:33.076 09:01:49 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:33.076 09:01:49 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:27:33.076 09:01:49 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:27:33.076 09:01:49 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:35.609 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:35.609 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:35.868 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:35.868 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:35.868 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:35.868 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:35.868 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:35.868 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:35.868 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:35.868 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:35.868 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:35.868 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:35.868 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:35.868 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:35.868 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:35.868 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:37.771 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:27:37.771 00:27:37.771 real 0m18.672s 00:27:37.771 user 0m4.237s 00:27:37.771 sys 0m9.839s 00:27:37.771 09:01:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:37.771 09:01:54 -- common/autotest_common.sh@10 -- # set +x 00:27:37.771 ************************************ 00:27:37.771 
END TEST nvmf_identify_kernel_target 00:27:37.771 ************************************ 00:27:37.771 09:01:54 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:37.771 09:01:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:37.771 09:01:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:37.771 09:01:54 -- common/autotest_common.sh@10 -- # set +x 00:27:37.771 ************************************ 00:27:37.771 START TEST nvmf_auth 00:27:37.771 ************************************ 00:27:37.771 09:01:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:38.030 * Looking for test storage... 00:27:38.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:38.030 09:01:55 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.030 09:01:55 -- nvmf/common.sh@7 -- # uname -s 00:27:38.030 09:01:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.030 09:01:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.030 09:01:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.030 09:01:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.030 09:01:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.030 09:01:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.030 09:01:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.030 09:01:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.030 09:01:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.030 09:01:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.030 09:01:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:38.030 09:01:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:38.030 09:01:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.030 09:01:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.030 09:01:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.030 09:01:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.030 09:01:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.030 09:01:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.030 09:01:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.030 09:01:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.030 09:01:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.030 09:01:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.030 09:01:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.030 09:01:55 -- paths/export.sh@5 -- # export PATH 00:27:38.030 09:01:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.030 09:01:55 -- nvmf/common.sh@47 -- # : 0 00:27:38.030 09:01:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.030 09:01:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.030 09:01:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.030 09:01:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.030 09:01:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.030 09:01:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.030 09:01:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.030 09:01:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.030 09:01:55 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:38.030 09:01:55 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:38.030 09:01:55 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:38.030 09:01:55 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:38.030 09:01:55 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:38.030 09:01:55 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:38.030 09:01:55 -- host/auth.sh@21 -- # keys=() 00:27:38.030 09:01:55 -- host/auth.sh@77 -- # nvmftestinit 00:27:38.030 09:01:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:38.030 09:01:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.030 09:01:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:38.030 09:01:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:38.030 09:01:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:38.030 09:01:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.030 09:01:55 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:38.031 09:01:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.031 09:01:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:27:38.031 09:01:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:27:38.031 09:01:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.031 09:01:55 -- common/autotest_common.sh@10 -- # set +x 00:27:44.592 09:02:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:44.592 09:02:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:44.592 09:02:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:44.592 09:02:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:44.592 09:02:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:44.592 09:02:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:44.592 09:02:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:44.592 09:02:01 -- nvmf/common.sh@295 -- # net_devs=() 00:27:44.592 09:02:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:44.592 09:02:01 -- nvmf/common.sh@296 -- # e810=() 00:27:44.592 09:02:01 -- nvmf/common.sh@296 -- # local -ga e810 00:27:44.592 09:02:01 -- nvmf/common.sh@297 -- # x722=() 00:27:44.592 09:02:01 -- nvmf/common.sh@297 -- # local -ga x722 00:27:44.592 09:02:01 -- nvmf/common.sh@298 -- # mlx=() 00:27:44.592 09:02:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:44.592 09:02:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.592 09:02:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.592 09:02:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.592 09:02:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.592 09:02:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.592 09:02:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:44.592 09:02:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.592 09:02:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.592 09:02:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.592 09:02:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.592 09:02:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.592 09:02:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:44.592 09:02:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:44.592 09:02:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:44.592 09:02:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:44.592 09:02:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:44.592 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:44.592 09:02:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:44.592 09:02:01 -- nvmf/common.sh@341 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:27:44.592 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:44.592 09:02:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:44.592 09:02:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:44.592 09:02:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.592 09:02:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:44.592 09:02:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.592 09:02:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:44.592 Found net devices under 0000:af:00.0: cvl_0_0 00:27:44.592 09:02:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.592 09:02:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:44.592 09:02:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.592 09:02:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:44.592 09:02:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.592 09:02:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:44.592 Found net devices under 0000:af:00.1: cvl_0_1 00:27:44.592 09:02:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.592 09:02:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:44.592 09:02:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:44.592 09:02:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:44.592 09:02:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:44.592 09:02:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.592 09:02:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.592 09:02:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.592 09:02:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:44.592 09:02:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:44.592 09:02:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:44.592 09:02:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:44.592 09:02:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:44.592 09:02:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.592 09:02:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:44.592 09:02:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:44.592 09:02:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:44.592 09:02:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:44.592 09:02:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:44.592 09:02:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:44.592 09:02:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:44.592 09:02:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:44.592 09:02:01 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:44.592 09:02:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:44.592 09:02:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:44.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:44.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:27:44.592 00:27:44.592 --- 10.0.0.2 ping statistics --- 00:27:44.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.592 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:27:44.592 09:02:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:44.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:44.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:27:44.593 00:27:44.593 --- 10.0.0.1 ping statistics --- 00:27:44.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.593 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:27:44.593 09:02:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:44.593 09:02:01 -- nvmf/common.sh@411 -- # return 0 00:27:44.593 09:02:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:44.593 09:02:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:44.593 09:02:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:44.593 09:02:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:44.593 09:02:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:44.593 09:02:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:44.593 09:02:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:44.593 09:02:01 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:27:44.593 09:02:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:44.593 09:02:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:44.593 09:02:01 -- common/autotest_common.sh@10 -- # set +x 00:27:44.851 09:02:01 -- nvmf/common.sh@470 -- # nvmfpid=2206476 00:27:44.851 09:02:01 -- nvmf/common.sh@471 -- # waitforlisten 2206476 00:27:44.851 09:02:01 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:44.851 09:02:01 -- common/autotest_common.sh@817 -- # '[' -z 2206476 ']' 00:27:44.851 09:02:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.851 09:02:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:44.851 09:02:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
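[Editor's note] The trace above is the standard nvmf_tcp_init bring-up: the target-side port (cvl_0_0) is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened in the firewall, both directions are verified with ping, and then nvmf_tgt is launched inside the namespace while waitforlisten polls for its RPC socket. A minimal standalone sketch of the same sequence, using the interface names, addresses, and nvmf_tgt flags from this run (any other environment would substitute its own):

# Recreate the two-port TCP test topology used by nvmf_tcp_init.
TARGET_IF=cvl_0_0          # NIC handed to the SPDK target
INITIATOR_IF=cvl_0_1       # NIC left in the root namespace
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Launch nvmf_tgt inside the namespace, as nvmfappstart does here;
# waitforlisten then polls /var/tmp/spdk.sock until the app answers.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &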
00:27:44.851 09:02:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:44.851 09:02:01 -- common/autotest_common.sh@10 -- # set +x 00:27:45.786 09:02:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:45.786 09:02:02 -- common/autotest_common.sh@850 -- # return 0 00:27:45.786 09:02:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:45.786 09:02:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:45.786 09:02:02 -- common/autotest_common.sh@10 -- # set +x 00:27:45.786 09:02:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.786 09:02:02 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:45.786 09:02:02 -- host/auth.sh@81 -- # gen_key null 32 00:27:45.786 09:02:02 -- host/auth.sh@53 -- # local digest len file key 00:27:45.786 09:02:02 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.786 09:02:02 -- host/auth.sh@54 -- # local -A digests 00:27:45.786 09:02:02 -- host/auth.sh@56 -- # digest=null 00:27:45.786 09:02:02 -- host/auth.sh@56 -- # len=32 00:27:45.786 09:02:02 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:45.786 09:02:02 -- host/auth.sh@57 -- # key=cee30840516466f7b26611d1f94e8f4f 00:27:45.786 09:02:02 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:27:45.787 09:02:02 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.vF3 00:27:45.787 09:02:02 -- host/auth.sh@59 -- # format_dhchap_key cee30840516466f7b26611d1f94e8f4f 0 00:27:45.787 09:02:02 -- nvmf/common.sh@708 -- # format_key DHHC-1 cee30840516466f7b26611d1f94e8f4f 0 00:27:45.787 09:02:02 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:45.787 09:02:02 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:45.787 09:02:02 -- nvmf/common.sh@693 -- # key=cee30840516466f7b26611d1f94e8f4f 00:27:45.787 09:02:02 -- nvmf/common.sh@693 -- # digest=0 00:27:45.787 09:02:02 -- nvmf/common.sh@694 -- # python - 00:27:45.787 09:02:02 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.vF3 00:27:45.787 09:02:02 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.vF3 00:27:45.787 09:02:02 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.vF3 00:27:45.787 09:02:02 -- host/auth.sh@82 -- # gen_key null 48 00:27:45.787 09:02:02 -- host/auth.sh@53 -- # local digest len file key 00:27:45.787 09:02:02 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.787 09:02:02 -- host/auth.sh@54 -- # local -A digests 00:27:45.787 09:02:02 -- host/auth.sh@56 -- # digest=null 00:27:45.787 09:02:02 -- host/auth.sh@56 -- # len=48 00:27:45.787 09:02:02 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:45.787 09:02:02 -- host/auth.sh@57 -- # key=3fc700540e5f639e64a91999b1408fcbe31a4bfbcb7b4da5 00:27:45.787 09:02:02 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:27:45.787 09:02:02 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.oDn 00:27:45.787 09:02:02 -- host/auth.sh@59 -- # format_dhchap_key 3fc700540e5f639e64a91999b1408fcbe31a4bfbcb7b4da5 0 00:27:45.787 09:02:02 -- nvmf/common.sh@708 -- # format_key DHHC-1 3fc700540e5f639e64a91999b1408fcbe31a4bfbcb7b4da5 0 00:27:45.787 09:02:02 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:45.787 09:02:02 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:45.787 09:02:02 -- nvmf/common.sh@693 -- # key=3fc700540e5f639e64a91999b1408fcbe31a4bfbcb7b4da5 00:27:45.787 09:02:02 -- nvmf/common.sh@693 -- # 
digest=0 00:27:45.787 09:02:02 -- nvmf/common.sh@694 -- # python - 00:27:45.787 09:02:02 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.oDn 00:27:45.787 09:02:02 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.oDn 00:27:45.787 09:02:02 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.oDn 00:27:45.787 09:02:02 -- host/auth.sh@83 -- # gen_key sha256 32 00:27:45.787 09:02:02 -- host/auth.sh@53 -- # local digest len file key 00:27:45.787 09:02:02 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.787 09:02:02 -- host/auth.sh@54 -- # local -A digests 00:27:45.787 09:02:02 -- host/auth.sh@56 -- # digest=sha256 00:27:45.787 09:02:02 -- host/auth.sh@56 -- # len=32 00:27:45.787 09:02:02 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:45.787 09:02:02 -- host/auth.sh@57 -- # key=ddda354622c16e39886fbfd3b835842d 00:27:45.787 09:02:02 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:27:45.787 09:02:02 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.6W9 00:27:45.787 09:02:02 -- host/auth.sh@59 -- # format_dhchap_key ddda354622c16e39886fbfd3b835842d 1 00:27:45.787 09:02:02 -- nvmf/common.sh@708 -- # format_key DHHC-1 ddda354622c16e39886fbfd3b835842d 1 00:27:45.787 09:02:02 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:45.787 09:02:02 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:45.787 09:02:02 -- nvmf/common.sh@693 -- # key=ddda354622c16e39886fbfd3b835842d 00:27:45.787 09:02:02 -- nvmf/common.sh@693 -- # digest=1 00:27:45.787 09:02:02 -- nvmf/common.sh@694 -- # python - 00:27:45.787 09:02:02 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.6W9 00:27:45.787 09:02:02 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.6W9 00:27:45.787 09:02:02 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.6W9 00:27:45.787 09:02:02 -- host/auth.sh@84 -- # gen_key sha384 48 00:27:45.787 09:02:02 -- host/auth.sh@53 -- # local digest len file key 00:27:45.787 09:02:02 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.787 09:02:02 -- host/auth.sh@54 -- # local -A digests 00:27:45.787 09:02:02 -- host/auth.sh@56 -- # digest=sha384 00:27:45.787 09:02:02 -- host/auth.sh@56 -- # len=48 00:27:45.787 09:02:02 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:45.787 09:02:02 -- host/auth.sh@57 -- # key=b582ac32bdcea409678e55593a40ee97f588d4888b9cac46 00:27:45.787 09:02:02 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:27:45.787 09:02:02 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.zzI 00:27:45.787 09:02:02 -- host/auth.sh@59 -- # format_dhchap_key b582ac32bdcea409678e55593a40ee97f588d4888b9cac46 2 00:27:45.787 09:02:02 -- nvmf/common.sh@708 -- # format_key DHHC-1 b582ac32bdcea409678e55593a40ee97f588d4888b9cac46 2 00:27:45.787 09:02:02 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:45.787 09:02:02 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:45.787 09:02:02 -- nvmf/common.sh@693 -- # key=b582ac32bdcea409678e55593a40ee97f588d4888b9cac46 00:27:45.787 09:02:02 -- nvmf/common.sh@693 -- # digest=2 00:27:45.787 09:02:02 -- nvmf/common.sh@694 -- # python - 00:27:45.787 09:02:02 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.zzI 00:27:45.787 09:02:02 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.zzI 00:27:45.787 09:02:02 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.zzI 00:27:45.787 09:02:02 -- host/auth.sh@85 -- # gen_key sha512 64 00:27:45.787 09:02:02 -- host/auth.sh@53 -- # local digest len file key 00:27:45.787 09:02:02 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:45.787 09:02:02 -- host/auth.sh@54 -- # local -A digests 00:27:45.787 09:02:02 -- host/auth.sh@56 -- # digest=sha512 00:27:45.787 09:02:02 -- host/auth.sh@56 -- # len=64 00:27:45.787 09:02:02 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:45.787 09:02:03 -- host/auth.sh@57 -- # key=669ac47835a6776287e0bc21897b626593848d873f9278c2d9e5f00d9f2e6dfa 00:27:45.787 09:02:03 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:27:45.787 09:02:03 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.Tpb 00:27:45.787 09:02:03 -- host/auth.sh@59 -- # format_dhchap_key 669ac47835a6776287e0bc21897b626593848d873f9278c2d9e5f00d9f2e6dfa 3 00:27:45.787 09:02:03 -- nvmf/common.sh@708 -- # format_key DHHC-1 669ac47835a6776287e0bc21897b626593848d873f9278c2d9e5f00d9f2e6dfa 3 00:27:45.787 09:02:03 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:45.787 09:02:03 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:45.787 09:02:03 -- nvmf/common.sh@693 -- # key=669ac47835a6776287e0bc21897b626593848d873f9278c2d9e5f00d9f2e6dfa 00:27:45.787 09:02:03 -- nvmf/common.sh@693 -- # digest=3 00:27:45.787 09:02:03 -- nvmf/common.sh@694 -- # python - 00:27:46.046 09:02:03 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.Tpb 00:27:46.046 09:02:03 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.Tpb 00:27:46.046 09:02:03 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.Tpb 00:27:46.046 09:02:03 -- host/auth.sh@87 -- # waitforlisten 2206476 00:27:46.046 09:02:03 -- common/autotest_common.sh@817 -- # '[' -z 2206476 ']' 00:27:46.046 09:02:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.046 09:02:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:46.046 09:02:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
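[Editor's note] The five gen_key calls above all follow the same pattern: draw len/2 random bytes as a hex string with xxd, then wrap that string in the DHHC-1 secret representation via format_dhchap_key, whose inline python step (judging by decoding the keys printed later in this log, e.g. DHHC-1:00:M2Zj...) base64-encodes the ASCII key followed by its little-endian CRC-32. A sketch reproducing that, with the CRC-32 suffix treated as an inference from this log rather than a documented format guarantee:

# Generate a DH-HMAC-CHAP secret like gen_key/format_dhchap_key:
# digest index 0..3 = null/sha256/sha384/sha512, len = hex chars of key material.
gen_dhchap_key() {
    local digest=$1 len=$2 key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    python3 -c '
import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)   # inferred CRC-32 tail
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$key" "$digest"
}

gen_dhchap_key 0 32   # null digest, 32 hex chars  -> DHHC-1:00:...  (keys[0])
gen_dhchap_key 2 48   # sha384 digest, 48 hex chars -> DHHC-1:02:... (keys[3])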
00:27:46.046 09:02:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:46.046 09:02:03 -- common/autotest_common.sh@10 -- # set +x 00:27:46.046 09:02:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:46.046 09:02:03 -- common/autotest_common.sh@850 -- # return 0 00:27:46.046 09:02:03 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:46.046 09:02:03 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vF3 00:27:46.046 09:02:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.046 09:02:03 -- common/autotest_common.sh@10 -- # set +x 00:27:46.046 09:02:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.046 09:02:03 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:46.046 09:02:03 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.oDn 00:27:46.046 09:02:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.046 09:02:03 -- common/autotest_common.sh@10 -- # set +x 00:27:46.046 09:02:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.046 09:02:03 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:46.046 09:02:03 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.6W9 00:27:46.046 09:02:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.046 09:02:03 -- common/autotest_common.sh@10 -- # set +x 00:27:46.046 09:02:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.046 09:02:03 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:46.046 09:02:03 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.zzI 00:27:46.046 09:02:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.046 09:02:03 -- common/autotest_common.sh@10 -- # set +x 00:27:46.046 09:02:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.046 09:02:03 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:46.046 09:02:03 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Tpb 00:27:46.046 09:02:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.046 09:02:03 -- common/autotest_common.sh@10 -- # set +x 00:27:46.046 09:02:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.046 09:02:03 -- host/auth.sh@92 -- # nvmet_auth_init 00:27:46.046 09:02:03 -- host/auth.sh@35 -- # get_main_ns_ip 00:27:46.046 09:02:03 -- nvmf/common.sh@717 -- # local ip 00:27:46.046 09:02:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:46.046 09:02:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:46.046 09:02:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.046 09:02:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.046 09:02:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:46.046 09:02:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.046 09:02:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:46.046 09:02:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:46.046 09:02:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:46.046 09:02:03 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:46.046 09:02:03 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:46.046 09:02:03 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:27:46.046 09:02:03 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:46.046 09:02:03 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:46.046 09:02:03 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:46.046 09:02:03 -- nvmf/common.sh@628 -- # local block nvme 00:27:46.047 09:02:03 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:27:46.047 09:02:03 -- nvmf/common.sh@631 -- # modprobe nvmet 00:27:46.306 09:02:03 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:46.306 09:02:03 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:49.592 Waiting for block devices as requested 00:27:49.592 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:49.592 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:49.592 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:49.592 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:49.851 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:49.851 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:49.851 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:49.851 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:50.109 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:50.109 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:50.109 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:50.367 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:50.367 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:50.367 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:50.625 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:50.625 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:50.625 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:27:51.559 09:02:08 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:51.559 09:02:08 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:51.559 09:02:08 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:27:51.559 09:02:08 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:51.559 09:02:08 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:51.559 09:02:08 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:51.559 09:02:08 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:27:51.559 09:02:08 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:51.559 09:02:08 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:51.559 No valid GPT data, bailing 00:27:51.559 09:02:08 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:51.559 09:02:08 -- scripts/common.sh@391 -- # pt= 00:27:51.559 09:02:08 -- scripts/common.sh@392 -- # return 1 00:27:51.559 09:02:08 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:27:51.559 09:02:08 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:27:51.559 09:02:08 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:51.559 09:02:08 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:51.560 09:02:08 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:51.560 09:02:08 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:51.560 09:02:08 -- nvmf/common.sh@656 -- # echo 1 00:27:51.560 09:02:08 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:27:51.560 09:02:08 -- nvmf/common.sh@658 -- # echo 1 00:27:51.560 09:02:08 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:27:51.560 09:02:08 -- nvmf/common.sh@661 -- # echo tcp 00:27:51.560 09:02:08 -- 
nvmf/common.sh@662 -- # echo 4420 00:27:51.560 09:02:08 -- nvmf/common.sh@663 -- # echo ipv4 00:27:51.560 09:02:08 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:51.560 09:02:08 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:27:51.819 00:27:51.819 Discovery Log Number of Records 2, Generation counter 2 00:27:51.819 =====Discovery Log Entry 0====== 00:27:51.819 trtype: tcp 00:27:51.819 adrfam: ipv4 00:27:51.819 subtype: current discovery subsystem 00:27:51.819 treq: not specified, sq flow control disable supported 00:27:51.819 portid: 1 00:27:51.819 trsvcid: 4420 00:27:51.819 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:51.819 traddr: 10.0.0.1 00:27:51.819 eflags: none 00:27:51.819 sectype: none 00:27:51.819 =====Discovery Log Entry 1====== 00:27:51.819 trtype: tcp 00:27:51.819 adrfam: ipv4 00:27:51.819 subtype: nvme subsystem 00:27:51.819 treq: not specified, sq flow control disable supported 00:27:51.819 portid: 1 00:27:51.819 trsvcid: 4420 00:27:51.819 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:51.819 traddr: 10.0.0.1 00:27:51.819 eflags: none 00:27:51.819 sectype: none 00:27:51.819 09:02:08 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:51.819 09:02:08 -- host/auth.sh@37 -- # echo 0 00:27:51.819 09:02:08 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:51.819 09:02:08 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:51.819 09:02:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:51.819 09:02:08 -- host/auth.sh@44 -- # digest=sha256 00:27:51.819 09:02:08 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.819 09:02:08 -- host/auth.sh@44 -- # keyid=1 00:27:51.819 09:02:08 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:27:51.819 09:02:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:51.819 09:02:08 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:51.819 09:02:08 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:27:51.819 09:02:08 -- host/auth.sh@100 -- # IFS=, 00:27:51.819 09:02:08 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:27:51.819 09:02:08 -- host/auth.sh@100 -- # IFS=, 00:27:51.819 09:02:08 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:51.819 09:02:08 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:51.819 09:02:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:51.819 09:02:08 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:27:51.819 09:02:08 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:51.819 09:02:08 -- host/auth.sh@68 -- # keyid=1 00:27:51.819 09:02:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:51.819 09:02:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.819 09:02:08 -- common/autotest_common.sh@10 -- # set +x 00:27:51.819 09:02:08 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.819 09:02:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:51.819 09:02:08 -- nvmf/common.sh@717 -- # local ip 00:27:51.819 09:02:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:51.819 09:02:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:51.819 09:02:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.819 09:02:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.819 09:02:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:51.819 09:02:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.819 09:02:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:51.819 09:02:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:51.819 09:02:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:51.819 09:02:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:51.819 09:02:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.819 09:02:08 -- common/autotest_common.sh@10 -- # set +x 00:27:51.819 nvme0n1 00:27:51.819 09:02:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.819 09:02:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.819 09:02:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:51.819 09:02:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.819 09:02:08 -- common/autotest_common.sh@10 -- # set +x 00:27:51.819 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.819 09:02:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.819 09:02:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.819 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.819 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:51.819 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.819 09:02:09 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:27:51.819 09:02:09 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:51.819 09:02:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:51.819 09:02:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:51.819 09:02:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:51.819 09:02:09 -- host/auth.sh@44 -- # digest=sha256 00:27:51.819 09:02:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.819 09:02:09 -- host/auth.sh@44 -- # keyid=0 00:27:51.819 09:02:09 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:27:51.819 09:02:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:51.819 09:02:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:51.819 09:02:09 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:27:51.819 09:02:09 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:27:51.819 09:02:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:51.819 09:02:09 -- host/auth.sh@68 -- # digest=sha256 00:27:51.819 09:02:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:51.819 09:02:09 -- host/auth.sh@68 -- # keyid=0 00:27:51.819 09:02:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:51.819 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.819 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.078 09:02:09 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.078 09:02:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:52.078 09:02:09 -- nvmf/common.sh@717 -- # local ip 00:27:52.078 09:02:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:52.078 09:02:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:52.078 09:02:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.078 09:02:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.078 09:02:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:52.078 09:02:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.078 09:02:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:52.078 09:02:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:52.078 09:02:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:52.078 09:02:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:52.078 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.078 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.078 nvme0n1 00:27:52.078 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.078 09:02:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.078 09:02:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:52.078 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.078 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.078 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.078 09:02:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.078 09:02:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.078 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.078 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.078 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.078 09:02:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:52.078 09:02:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:52.078 09:02:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:52.078 09:02:09 -- host/auth.sh@44 -- # digest=sha256 00:27:52.078 09:02:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.078 09:02:09 -- host/auth.sh@44 -- # keyid=1 00:27:52.078 09:02:09 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:27:52.078 09:02:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:52.078 09:02:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:52.078 09:02:09 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:27:52.078 09:02:09 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:27:52.078 09:02:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:52.078 09:02:09 -- host/auth.sh@68 -- # digest=sha256 00:27:52.078 09:02:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:52.078 09:02:09 -- host/auth.sh@68 -- # keyid=1 00:27:52.078 09:02:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:52.078 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.078 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.078 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.078 09:02:09 -- host/auth.sh@70 -- # get_main_ns_ip 
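[Editor's note] Each connect_authenticate iteration in the trace is the same four-step RPC exchange: restrict the initiator to one digest/DH-group combination, attach a controller using one of the keyring files registered earlier, confirm the controller actually came up, and detach again. The harness drives this through rpc_cmd; against the target's default /var/tmp/spdk.sock it is roughly equivalent to the following scripts/rpc.py sequence (NQNs, address, and flags are the ones used in this run):

RPC="./scripts/rpc.py"   # talks to /var/tmp/spdk.sock by default

# Allow exactly one DH-HMAC-CHAP digest and DH group.
$RPC bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach with the secret loaded as key1 via keyring_file_add_key.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1

# Verify authentication succeeded, then tear the controller down.
$RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$RPC bdev_nvme_detach_controller nvme0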
00:27:52.078 09:02:09 -- nvmf/common.sh@717 -- # local ip 00:27:52.078 09:02:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:52.078 09:02:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:52.078 09:02:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.078 09:02:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.078 09:02:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:52.078 09:02:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.078 09:02:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:52.078 09:02:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:52.078 09:02:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:52.078 09:02:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:52.078 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.078 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.337 nvme0n1 00:27:52.337 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.337 09:02:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.337 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.337 09:02:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:52.337 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.337 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.337 09:02:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.337 09:02:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.337 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.337 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.337 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.337 09:02:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:52.337 09:02:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:52.337 09:02:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:52.337 09:02:09 -- host/auth.sh@44 -- # digest=sha256 00:27:52.337 09:02:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.337 09:02:09 -- host/auth.sh@44 -- # keyid=2 00:27:52.337 09:02:09 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:27:52.337 09:02:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:52.337 09:02:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:52.337 09:02:09 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:27:52.337 09:02:09 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:27:52.337 09:02:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:52.337 09:02:09 -- host/auth.sh@68 -- # digest=sha256 00:27:52.337 09:02:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:52.337 09:02:09 -- host/auth.sh@68 -- # keyid=2 00:27:52.337 09:02:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:52.337 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.337 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.337 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.337 09:02:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:52.337 09:02:09 -- nvmf/common.sh@717 -- # local ip 00:27:52.337 09:02:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:52.337 09:02:09 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:27:52.337 09:02:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.337 09:02:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.337 09:02:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:52.337 09:02:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.337 09:02:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:52.337 09:02:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:52.337 09:02:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:52.337 09:02:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:52.337 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.337 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.597 nvme0n1 00:27:52.597 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.597 09:02:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.597 09:02:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:52.597 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.597 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.597 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.597 09:02:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.597 09:02:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.597 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.597 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.597 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.597 09:02:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:52.597 09:02:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:52.597 09:02:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:52.597 09:02:09 -- host/auth.sh@44 -- # digest=sha256 00:27:52.597 09:02:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.597 09:02:09 -- host/auth.sh@44 -- # keyid=3 00:27:52.597 09:02:09 -- host/auth.sh@45 -- # key=DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:27:52.597 09:02:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:52.597 09:02:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:52.597 09:02:09 -- host/auth.sh@49 -- # echo DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:27:52.597 09:02:09 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:27:52.597 09:02:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:52.597 09:02:09 -- host/auth.sh@68 -- # digest=sha256 00:27:52.597 09:02:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:52.597 09:02:09 -- host/auth.sh@68 -- # keyid=3 00:27:52.597 09:02:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:52.597 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.597 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.597 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.597 09:02:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:52.597 09:02:09 -- nvmf/common.sh@717 -- # local ip 00:27:52.597 09:02:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:52.597 09:02:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:52.597 09:02:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
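[Editor's note] On the target side, each nvmet_auth_set_key call above pushes the matching parameters into the kernel nvmet host entry created by nvmet_auth_init: the 'hmac(sha256)', ffdhe2048, and DHHC-1:... values echoed in the trace are written into the host's configfs attributes. A sketch of that writer, assuming the standard Linux nvmet attribute names dhchap_hash, dhchap_dhgroup, and dhchap_key (the redirection targets are not visible in the xtrace output itself):

NVMET_HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 key=$3
    echo "hmac($digest)" > "$NVMET_HOST/dhchap_hash"
    echo "$dhgroup"      > "$NVMET_HOST/dhchap_dhgroup"
    echo "$key"          > "$NVMET_HOST/dhchap_key"
}

# Same values the trace echoes for keyid 2:
nvmet_auth_set_key sha256 ffdhe2048 \
    "DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p:"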
00:27:52.597 09:02:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.597 09:02:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:52.597 09:02:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.597 09:02:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:52.597 09:02:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:52.597 09:02:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:52.597 09:02:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:52.597 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.597 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.856 nvme0n1 00:27:52.856 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.856 09:02:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.856 09:02:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:52.856 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.856 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.856 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.856 09:02:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.856 09:02:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.856 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.856 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.856 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.856 09:02:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:52.856 09:02:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:52.856 09:02:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:52.856 09:02:09 -- host/auth.sh@44 -- # digest=sha256 00:27:52.856 09:02:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.856 09:02:09 -- host/auth.sh@44 -- # keyid=4 00:27:52.856 09:02:09 -- host/auth.sh@45 -- # key=DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:27:52.856 09:02:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:52.856 09:02:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:52.856 09:02:09 -- host/auth.sh@49 -- # echo DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:27:52.856 09:02:09 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:27:52.856 09:02:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:52.856 09:02:09 -- host/auth.sh@68 -- # digest=sha256 00:27:52.856 09:02:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:52.856 09:02:09 -- host/auth.sh@68 -- # keyid=4 00:27:52.856 09:02:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:52.856 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.856 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.856 09:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.856 09:02:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:52.856 09:02:09 -- nvmf/common.sh@717 -- # local ip 00:27:52.856 09:02:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:52.856 09:02:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:52.856 09:02:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.856 09:02:09 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.856 09:02:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:52.856 09:02:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.856 09:02:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:52.856 09:02:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:52.856 09:02:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:52.856 09:02:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.856 09:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.856 09:02:09 -- common/autotest_common.sh@10 -- # set +x 00:27:52.856 nvme0n1 00:27:52.856 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.856 09:02:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.856 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.856 09:02:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:52.856 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:52.856 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:52.856 09:02:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.856 09:02:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.856 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:52.856 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.114 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.114 09:02:10 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.114 09:02:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:53.114 09:02:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:53.114 09:02:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:53.114 09:02:10 -- host/auth.sh@44 -- # digest=sha256 00:27:53.114 09:02:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.114 09:02:10 -- host/auth.sh@44 -- # keyid=0 00:27:53.114 09:02:10 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:27:53.114 09:02:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:53.114 09:02:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:53.114 09:02:10 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:27:53.114 09:02:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:27:53.114 09:02:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:53.114 09:02:10 -- host/auth.sh@68 -- # digest=sha256 00:27:53.114 09:02:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:53.114 09:02:10 -- host/auth.sh@68 -- # keyid=0 00:27:53.114 09:02:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:53.114 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.114 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.114 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.114 09:02:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:53.114 09:02:10 -- nvmf/common.sh@717 -- # local ip 00:27:53.114 09:02:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:53.115 09:02:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:53.115 09:02:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.115 09:02:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.115 09:02:10 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:27:53.115 09:02:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.115 09:02:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:53.115 09:02:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:53.115 09:02:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:53.115 09:02:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:53.115 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.115 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.115 nvme0n1 00:27:53.115 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.115 09:02:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.115 09:02:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:53.115 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.115 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.115 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.115 09:02:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.115 09:02:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.115 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.115 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.373 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.373 09:02:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:53.373 09:02:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:53.373 09:02:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:53.373 09:02:10 -- host/auth.sh@44 -- # digest=sha256 00:27:53.373 09:02:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.373 09:02:10 -- host/auth.sh@44 -- # keyid=1 00:27:53.373 09:02:10 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:27:53.373 09:02:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:53.373 09:02:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:53.373 09:02:10 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:27:53.373 09:02:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:27:53.373 09:02:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:53.373 09:02:10 -- host/auth.sh@68 -- # digest=sha256 00:27:53.373 09:02:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:53.373 09:02:10 -- host/auth.sh@68 -- # keyid=1 00:27:53.373 09:02:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:53.373 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.373 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.373 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.373 09:02:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:53.373 09:02:10 -- nvmf/common.sh@717 -- # local ip 00:27:53.373 09:02:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:53.373 09:02:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:53.373 09:02:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.373 09:02:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.373 09:02:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:53.373 09:02:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.373 09:02:10 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:53.373 09:02:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:53.373 09:02:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:53.373 09:02:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:53.373 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.373 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.373 nvme0n1 00:27:53.373 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.373 09:02:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.373 09:02:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:53.373 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.374 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.374 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.374 09:02:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.374 09:02:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.374 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.374 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.374 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.374 09:02:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:53.374 09:02:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:53.374 09:02:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:53.374 09:02:10 -- host/auth.sh@44 -- # digest=sha256 00:27:53.374 09:02:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.374 09:02:10 -- host/auth.sh@44 -- # keyid=2 00:27:53.374 09:02:10 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:27:53.374 09:02:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:53.374 09:02:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:53.374 09:02:10 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:27:53.374 09:02:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:27:53.374 09:02:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:53.374 09:02:10 -- host/auth.sh@68 -- # digest=sha256 00:27:53.374 09:02:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:53.374 09:02:10 -- host/auth.sh@68 -- # keyid=2 00:27:53.374 09:02:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:53.374 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.374 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.374 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.374 09:02:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:53.374 09:02:10 -- nvmf/common.sh@717 -- # local ip 00:27:53.374 09:02:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:53.374 09:02:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:53.374 09:02:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.374 09:02:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.374 09:02:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:53.374 09:02:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.374 09:02:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:53.374 09:02:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:53.374 09:02:10 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:27:53.374 09:02:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:53.374 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.374 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.632 nvme0n1 00:27:53.632 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.632 09:02:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.632 09:02:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:53.632 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.632 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.632 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.632 09:02:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.632 09:02:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.632 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.632 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.632 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.632 09:02:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:53.632 09:02:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:53.632 09:02:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:53.632 09:02:10 -- host/auth.sh@44 -- # digest=sha256 00:27:53.632 09:02:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.632 09:02:10 -- host/auth.sh@44 -- # keyid=3 00:27:53.632 09:02:10 -- host/auth.sh@45 -- # key=DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:27:53.632 09:02:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:53.632 09:02:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:53.632 09:02:10 -- host/auth.sh@49 -- # echo DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:27:53.632 09:02:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:27:53.632 09:02:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:53.632 09:02:10 -- host/auth.sh@68 -- # digest=sha256 00:27:53.632 09:02:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:53.632 09:02:10 -- host/auth.sh@68 -- # keyid=3 00:27:53.632 09:02:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:53.632 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.632 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.632 09:02:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.632 09:02:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:53.632 09:02:10 -- nvmf/common.sh@717 -- # local ip 00:27:53.632 09:02:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:53.632 09:02:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:53.632 09:02:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.632 09:02:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.632 09:02:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:53.632 09:02:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.632 09:02:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:53.633 09:02:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:53.633 09:02:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:53.633 09:02:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:53.633 09:02:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.633 09:02:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.892 nvme0n1 00:27:53.892 09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.892 09:02:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.892 09:02:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:53.892 09:02:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.892 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:53.892 09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.892 09:02:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.892 09:02:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.892 09:02:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.892 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:53.892 09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.892 09:02:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:53.892 09:02:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:53.892 09:02:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:53.892 09:02:11 -- host/auth.sh@44 -- # digest=sha256 00:27:53.892 09:02:11 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:53.892 09:02:11 -- host/auth.sh@44 -- # keyid=4 00:27:53.892 09:02:11 -- host/auth.sh@45 -- # key=DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:27:53.892 09:02:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:53.892 09:02:11 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:53.892 09:02:11 -- host/auth.sh@49 -- # echo DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:27:53.892 09:02:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:27:53.892 09:02:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:53.892 09:02:11 -- host/auth.sh@68 -- # digest=sha256 00:27:53.892 09:02:11 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:53.892 09:02:11 -- host/auth.sh@68 -- # keyid=4 00:27:53.892 09:02:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:53.892 09:02:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.892 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:53.892 09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.892 09:02:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:53.892 09:02:11 -- nvmf/common.sh@717 -- # local ip 00:27:53.892 09:02:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:53.892 09:02:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:53.892 09:02:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.892 09:02:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.892 09:02:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:53.892 09:02:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.892 09:02:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:53.892 09:02:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:53.892 09:02:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:53.892 09:02:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:27:53.892 09:02:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.892 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:54.150 nvme0n1 00:27:54.150 09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.150 09:02:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.150 09:02:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.150 09:02:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.150 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:54.150 09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.150 09:02:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.150 09:02:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.150 09:02:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.150 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:54.150 09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.150 09:02:11 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:54.150 09:02:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:54.150 09:02:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:54.150 09:02:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:54.150 09:02:11 -- host/auth.sh@44 -- # digest=sha256 00:27:54.150 09:02:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.150 09:02:11 -- host/auth.sh@44 -- # keyid=0 00:27:54.150 09:02:11 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:27:54.150 09:02:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:54.150 09:02:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:54.150 09:02:11 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:27:54.150 09:02:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:27:54.150 09:02:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:54.150 09:02:11 -- host/auth.sh@68 -- # digest=sha256 00:27:54.150 09:02:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:54.150 09:02:11 -- host/auth.sh@68 -- # keyid=0 00:27:54.150 09:02:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:54.150 09:02:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.150 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:54.150 09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.150 09:02:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:54.150 09:02:11 -- nvmf/common.sh@717 -- # local ip 00:27:54.150 09:02:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:54.150 09:02:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:54.150 09:02:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.150 09:02:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.150 09:02:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:54.150 09:02:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.150 09:02:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:54.150 09:02:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:54.150 09:02:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:54.150 09:02:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:54.150 09:02:11 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:27:54.150 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:54.409 nvme0n1 00:27:54.409 09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.409 09:02:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.409 09:02:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.409 09:02:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.409 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:54.409 09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.409 09:02:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.409 09:02:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.409 09:02:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.409 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:54.409 09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.409 09:02:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:54.409 09:02:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:54.409 09:02:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:54.409 09:02:11 -- host/auth.sh@44 -- # digest=sha256 00:27:54.409 09:02:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.409 09:02:11 -- host/auth.sh@44 -- # keyid=1 00:27:54.409 09:02:11 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:27:54.409 09:02:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:54.409 09:02:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:54.409 09:02:11 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:27:54.409 09:02:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:27:54.409 09:02:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:54.409 09:02:11 -- host/auth.sh@68 -- # digest=sha256 00:27:54.409 09:02:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:54.409 09:02:11 -- host/auth.sh@68 -- # keyid=1 00:27:54.409 09:02:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:54.409 09:02:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.409 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:54.409 09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.409 09:02:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:54.409 09:02:11 -- nvmf/common.sh@717 -- # local ip 00:27:54.409 09:02:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:54.409 09:02:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:54.409 09:02:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.409 09:02:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.409 09:02:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:54.409 09:02:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.409 09:02:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:54.409 09:02:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:54.409 09:02:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:54.409 09:02:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:54.409 09:02:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.409 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:54.668 nvme0n1 00:27:54.668 
09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.668 09:02:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.668 09:02:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.668 09:02:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.668 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:54.668 09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.668 09:02:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.668 09:02:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.668 09:02:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.668 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:54.668 09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.668 09:02:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:54.668 09:02:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:54.668 09:02:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:54.668 09:02:11 -- host/auth.sh@44 -- # digest=sha256 00:27:54.668 09:02:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:54.668 09:02:11 -- host/auth.sh@44 -- # keyid=2 00:27:54.668 09:02:11 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:27:54.668 09:02:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:54.668 09:02:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:54.668 09:02:11 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:27:54.668 09:02:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:27:54.668 09:02:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:54.668 09:02:11 -- host/auth.sh@68 -- # digest=sha256 00:27:54.668 09:02:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:54.668 09:02:11 -- host/auth.sh@68 -- # keyid=2 00:27:54.668 09:02:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:54.668 09:02:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.668 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:54.926 09:02:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.926 09:02:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:54.926 09:02:11 -- nvmf/common.sh@717 -- # local ip 00:27:54.926 09:02:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:54.926 09:02:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:54.926 09:02:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.926 09:02:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.926 09:02:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:54.926 09:02:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.926 09:02:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:54.926 09:02:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:54.926 09:02:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:54.926 09:02:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:54.926 09:02:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.926 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:27:54.926 nvme0n1 00:27:54.926 09:02:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.926 09:02:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.926 09:02:12 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.926 09:02:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.926 09:02:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.185 09:02:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.185 09:02:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.185 09:02:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.185 09:02:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.185 09:02:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.185 09:02:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.185 09:02:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.185 09:02:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:55.185 09:02:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.185 09:02:12 -- host/auth.sh@44 -- # digest=sha256 00:27:55.185 09:02:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.185 09:02:12 -- host/auth.sh@44 -- # keyid=3 00:27:55.185 09:02:12 -- host/auth.sh@45 -- # key=DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:27:55.185 09:02:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.185 09:02:12 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:55.185 09:02:12 -- host/auth.sh@49 -- # echo DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:27:55.185 09:02:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:27:55.185 09:02:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.185 09:02:12 -- host/auth.sh@68 -- # digest=sha256 00:27:55.185 09:02:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:55.185 09:02:12 -- host/auth.sh@68 -- # keyid=3 00:27:55.185 09:02:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:55.185 09:02:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.185 09:02:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.185 09:02:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.185 09:02:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.185 09:02:12 -- nvmf/common.sh@717 -- # local ip 00:27:55.185 09:02:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.185 09:02:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.185 09:02:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.185 09:02:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.185 09:02:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.185 09:02:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.185 09:02:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.185 09:02:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.185 09:02:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.185 09:02:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:55.185 09:02:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.185 09:02:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.444 nvme0n1 00:27:55.444 09:02:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.444 09:02:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.444 09:02:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.444 09:02:12 -- common/autotest_common.sh@549 -- # xtrace_disable 
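
Before every attach, the test resolves the target address through get_main_ns_ip (the nvmf/common.sh@717-731 run repeated throughout this trace): it maps the transport to the name of the environment variable holding the address, then dereferences it. A sketch reconstructed from that trace; the TEST_TRANSPORT variable name is an assumption (the trace only shows the literal tcp):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1                    # "[[ -z tcp ]]" above
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # "[[ -z NVMF_INITIATOR_IP ]]"
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1                             # "[[ -z 10.0.0.1 ]]"
      echo "${!ip}"                                           # the 10.0.0.1 handed to attach
  }
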
00:27:55.444 09:02:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.444 09:02:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.444 09:02:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.444 09:02:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.444 09:02:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.444 09:02:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.444 09:02:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.444 09:02:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.444 09:02:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:55.444 09:02:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.444 09:02:12 -- host/auth.sh@44 -- # digest=sha256 00:27:55.444 09:02:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:55.444 09:02:12 -- host/auth.sh@44 -- # keyid=4 00:27:55.444 09:02:12 -- host/auth.sh@45 -- # key=DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:27:55.444 09:02:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.444 09:02:12 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:55.444 09:02:12 -- host/auth.sh@49 -- # echo DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:27:55.444 09:02:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:27:55.444 09:02:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.444 09:02:12 -- host/auth.sh@68 -- # digest=sha256 00:27:55.444 09:02:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:55.444 09:02:12 -- host/auth.sh@68 -- # keyid=4 00:27:55.444 09:02:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:55.444 09:02:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.444 09:02:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.444 09:02:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.444 09:02:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.444 09:02:12 -- nvmf/common.sh@717 -- # local ip 00:27:55.444 09:02:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.444 09:02:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.444 09:02:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.444 09:02:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.444 09:02:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.444 09:02:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.444 09:02:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.444 09:02:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.444 09:02:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.444 09:02:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.444 09:02:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.444 09:02:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.702 nvme0n1 00:27:55.702 09:02:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.702 09:02:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.702 09:02:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.702 09:02:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.702 09:02:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.702 
09:02:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.702 09:02:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.702 09:02:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.702 09:02:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.702 09:02:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.702 09:02:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.702 09:02:12 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.702 09:02:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.702 09:02:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:55.702 09:02:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.702 09:02:12 -- host/auth.sh@44 -- # digest=sha256 00:27:55.702 09:02:12 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.702 09:02:12 -- host/auth.sh@44 -- # keyid=0 00:27:55.702 09:02:12 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:27:55.702 09:02:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.702 09:02:12 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:55.702 09:02:12 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:27:55.702 09:02:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:27:55.702 09:02:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.703 09:02:12 -- host/auth.sh@68 -- # digest=sha256 00:27:55.703 09:02:12 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:55.703 09:02:12 -- host/auth.sh@68 -- # keyid=0 00:27:55.703 09:02:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:55.703 09:02:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.703 09:02:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.703 09:02:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.703 09:02:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.703 09:02:12 -- nvmf/common.sh@717 -- # local ip 00:27:55.703 09:02:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.703 09:02:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.703 09:02:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.703 09:02:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.703 09:02:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.703 09:02:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.703 09:02:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.703 09:02:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.703 09:02:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.703 09:02:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:55.703 09:02:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.703 09:02:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.961 nvme0n1 00:27:55.961 09:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.961 09:02:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.961 09:02:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.961 09:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.961 09:02:13 -- common/autotest_common.sh@10 -- # set +x 00:27:56.219 09:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.220 09:02:13 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.220 09:02:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.220 09:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.220 09:02:13 -- common/autotest_common.sh@10 -- # set +x 00:27:56.220 09:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.220 09:02:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:56.220 09:02:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:56.220 09:02:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:56.220 09:02:13 -- host/auth.sh@44 -- # digest=sha256 00:27:56.220 09:02:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.220 09:02:13 -- host/auth.sh@44 -- # keyid=1 00:27:56.220 09:02:13 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:27:56.220 09:02:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:56.220 09:02:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:56.220 09:02:13 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:27:56.220 09:02:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:27:56.220 09:02:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:56.220 09:02:13 -- host/auth.sh@68 -- # digest=sha256 00:27:56.220 09:02:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:56.220 09:02:13 -- host/auth.sh@68 -- # keyid=1 00:27:56.220 09:02:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:56.220 09:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.220 09:02:13 -- common/autotest_common.sh@10 -- # set +x 00:27:56.220 09:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.220 09:02:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:56.220 09:02:13 -- nvmf/common.sh@717 -- # local ip 00:27:56.220 09:02:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:56.220 09:02:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:56.220 09:02:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.220 09:02:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.220 09:02:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:56.220 09:02:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.220 09:02:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:56.220 09:02:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:56.220 09:02:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:56.220 09:02:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:56.220 09:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.220 09:02:13 -- common/autotest_common.sh@10 -- # set +x 00:27:56.477 nvme0n1 00:27:56.477 09:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.477 09:02:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.477 09:02:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:56.477 09:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.477 09:02:13 -- common/autotest_common.sh@10 -- # set +x 00:27:56.477 09:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.477 09:02:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.477 09:02:13 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:56.477 09:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.477 09:02:13 -- common/autotest_common.sh@10 -- # set +x 00:27:56.477 09:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.477 09:02:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:56.477 09:02:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:56.477 09:02:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:56.477 09:02:13 -- host/auth.sh@44 -- # digest=sha256 00:27:56.477 09:02:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:56.477 09:02:13 -- host/auth.sh@44 -- # keyid=2 00:27:56.477 09:02:13 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:27:56.477 09:02:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:56.477 09:02:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:56.477 09:02:13 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:27:56.477 09:02:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:27:56.477 09:02:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:56.477 09:02:13 -- host/auth.sh@68 -- # digest=sha256 00:27:56.477 09:02:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:56.477 09:02:13 -- host/auth.sh@68 -- # keyid=2 00:27:56.477 09:02:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:56.477 09:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.477 09:02:13 -- common/autotest_common.sh@10 -- # set +x 00:27:56.478 09:02:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.478 09:02:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:56.478 09:02:13 -- nvmf/common.sh@717 -- # local ip 00:27:56.478 09:02:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:56.478 09:02:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:56.478 09:02:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.478 09:02:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.478 09:02:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:56.478 09:02:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.478 09:02:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:56.478 09:02:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:56.478 09:02:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:56.478 09:02:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:56.478 09:02:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.478 09:02:13 -- common/autotest_common.sh@10 -- # set +x 00:27:57.043 nvme0n1 00:27:57.043 09:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.043 09:02:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.043 09:02:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:57.044 09:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.044 09:02:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.044 09:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.044 09:02:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.044 09:02:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.044 09:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.044 09:02:14 -- common/autotest_common.sh@10 -- # 
set +x 00:27:57.044 09:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.044 09:02:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.044 09:02:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:57.044 09:02:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.044 09:02:14 -- host/auth.sh@44 -- # digest=sha256 00:27:57.044 09:02:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.044 09:02:14 -- host/auth.sh@44 -- # keyid=3 00:27:57.044 09:02:14 -- host/auth.sh@45 -- # key=DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:27:57.044 09:02:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:57.044 09:02:14 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:57.044 09:02:14 -- host/auth.sh@49 -- # echo DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:27:57.044 09:02:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:27:57.044 09:02:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.044 09:02:14 -- host/auth.sh@68 -- # digest=sha256 00:27:57.044 09:02:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:57.044 09:02:14 -- host/auth.sh@68 -- # keyid=3 00:27:57.044 09:02:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:57.044 09:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.044 09:02:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.044 09:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.044 09:02:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.044 09:02:14 -- nvmf/common.sh@717 -- # local ip 00:27:57.044 09:02:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.044 09:02:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:57.044 09:02:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.044 09:02:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.044 09:02:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.044 09:02:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.044 09:02:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:57.044 09:02:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.044 09:02:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.044 09:02:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:57.044 09:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.044 09:02:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.303 nvme0n1 00:27:57.303 09:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.303 09:02:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.303 09:02:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:57.303 09:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.303 09:02:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.303 09:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.303 09:02:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.303 09:02:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.303 09:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.303 09:02:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.303 09:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.303 09:02:14 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.303 09:02:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:57.303 09:02:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.303 09:02:14 -- host/auth.sh@44 -- # digest=sha256 00:27:57.303 09:02:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:57.303 09:02:14 -- host/auth.sh@44 -- # keyid=4 00:27:57.303 09:02:14 -- host/auth.sh@45 -- # key=DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:27:57.303 09:02:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:57.303 09:02:14 -- host/auth.sh@48 -- # echo ffdhe6144 00:27:57.303 09:02:14 -- host/auth.sh@49 -- # echo DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:27:57.303 09:02:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:27:57.303 09:02:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.303 09:02:14 -- host/auth.sh@68 -- # digest=sha256 00:27:57.303 09:02:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:27:57.303 09:02:14 -- host/auth.sh@68 -- # keyid=4 00:27:57.303 09:02:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:57.303 09:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.303 09:02:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.303 09:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.303 09:02:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.303 09:02:14 -- nvmf/common.sh@717 -- # local ip 00:27:57.303 09:02:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.303 09:02:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:57.303 09:02:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.303 09:02:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.303 09:02:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.303 09:02:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.303 09:02:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:57.303 09:02:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.303 09:02:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.303 09:02:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.303 09:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.303 09:02:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.870 nvme0n1 00:27:57.870 09:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.870 09:02:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.870 09:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.870 09:02:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:57.870 09:02:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.870 09:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.870 09:02:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.870 09:02:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.870 09:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.870 09:02:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.870 09:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.870 09:02:14 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.870 09:02:14 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.870 09:02:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:57.870 09:02:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.870 09:02:14 -- host/auth.sh@44 -- # digest=sha256 00:27:57.870 09:02:14 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.870 09:02:14 -- host/auth.sh@44 -- # keyid=0 00:27:57.870 09:02:14 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:27:57.870 09:02:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:57.870 09:02:14 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:57.870 09:02:14 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:27:57.870 09:02:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:27:57.870 09:02:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.870 09:02:14 -- host/auth.sh@68 -- # digest=sha256 00:27:57.870 09:02:14 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:57.870 09:02:14 -- host/auth.sh@68 -- # keyid=0 00:27:57.870 09:02:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:57.870 09:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.870 09:02:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.870 09:02:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.870 09:02:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.870 09:02:14 -- nvmf/common.sh@717 -- # local ip 00:27:57.870 09:02:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.870 09:02:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:57.870 09:02:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.870 09:02:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.870 09:02:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.870 09:02:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.870 09:02:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:57.870 09:02:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.870 09:02:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.870 09:02:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:57.870 09:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.870 09:02:14 -- common/autotest_common.sh@10 -- # set +x 00:27:58.442 nvme0n1 00:27:58.442 09:02:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.442 09:02:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.442 09:02:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:58.442 09:02:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.442 09:02:15 -- common/autotest_common.sh@10 -- # set +x 00:27:58.442 09:02:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.442 09:02:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.442 09:02:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.442 09:02:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.442 09:02:15 -- common/autotest_common.sh@10 -- # set +x 00:27:58.442 09:02:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.442 09:02:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:58.442 09:02:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:58.442 09:02:15 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:58.442 09:02:15 -- host/auth.sh@44 -- # digest=sha256 00:27:58.442 09:02:15 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:58.442 09:02:15 -- host/auth.sh@44 -- # keyid=1 00:27:58.442 09:02:15 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:27:58.442 09:02:15 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:58.442 09:02:15 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:58.442 09:02:15 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:27:58.442 09:02:15 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:27:58.442 09:02:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:58.442 09:02:15 -- host/auth.sh@68 -- # digest=sha256 00:27:58.442 09:02:15 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:58.442 09:02:15 -- host/auth.sh@68 -- # keyid=1 00:27:58.442 09:02:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:58.442 09:02:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.442 09:02:15 -- common/autotest_common.sh@10 -- # set +x 00:27:58.443 09:02:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.443 09:02:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:58.443 09:02:15 -- nvmf/common.sh@717 -- # local ip 00:27:58.443 09:02:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:58.443 09:02:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:58.443 09:02:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.443 09:02:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.443 09:02:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:58.443 09:02:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.443 09:02:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:58.443 09:02:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:58.443 09:02:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:58.443 09:02:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:58.443 09:02:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.443 09:02:15 -- common/autotest_common.sh@10 -- # set +x 00:27:59.009 nvme0n1 00:27:59.009 09:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.009 09:02:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.009 09:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.009 09:02:16 -- common/autotest_common.sh@10 -- # set +x 00:27:59.009 09:02:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:59.009 09:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.009 09:02:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.009 09:02:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.009 09:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.009 09:02:16 -- common/autotest_common.sh@10 -- # set +x 00:27:59.009 09:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.009 09:02:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:59.009 09:02:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:59.009 09:02:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:59.009 09:02:16 -- host/auth.sh@44 -- # digest=sha256 
00:27:59.009 09:02:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.009 09:02:16 -- host/auth.sh@44 -- # keyid=2 00:27:59.009 09:02:16 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:27:59.009 09:02:16 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:59.009 09:02:16 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:59.009 09:02:16 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:27:59.009 09:02:16 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:27:59.009 09:02:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:59.009 09:02:16 -- host/auth.sh@68 -- # digest=sha256 00:27:59.009 09:02:16 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:59.009 09:02:16 -- host/auth.sh@68 -- # keyid=2 00:27:59.009 09:02:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:59.009 09:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.009 09:02:16 -- common/autotest_common.sh@10 -- # set +x 00:27:59.009 09:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.009 09:02:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:59.009 09:02:16 -- nvmf/common.sh@717 -- # local ip 00:27:59.009 09:02:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:59.009 09:02:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:59.009 09:02:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.009 09:02:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.009 09:02:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:59.009 09:02:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.009 09:02:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:59.009 09:02:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:59.009 09:02:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:59.009 09:02:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:59.009 09:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.009 09:02:16 -- common/autotest_common.sh@10 -- # set +x 00:27:59.576 nvme0n1 00:27:59.576 09:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.576 09:02:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.576 09:02:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:59.576 09:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.576 09:02:16 -- common/autotest_common.sh@10 -- # set +x 00:27:59.576 09:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.576 09:02:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.576 09:02:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.576 09:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.576 09:02:16 -- common/autotest_common.sh@10 -- # set +x 00:27:59.576 09:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.576 09:02:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:59.576 09:02:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:59.576 09:02:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:59.576 09:02:16 -- host/auth.sh@44 -- # digest=sha256 00:27:59.576 09:02:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.577 09:02:16 -- host/auth.sh@44 -- # keyid=3 00:27:59.577 09:02:16 -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:27:59.577 09:02:16 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:59.577 09:02:16 -- host/auth.sh@48 -- # echo ffdhe8192 00:27:59.577 09:02:16 -- host/auth.sh@49 -- # echo DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:27:59.577 09:02:16 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:27:59.577 09:02:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:59.577 09:02:16 -- host/auth.sh@68 -- # digest=sha256 00:27:59.577 09:02:16 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:27:59.577 09:02:16 -- host/auth.sh@68 -- # keyid=3 00:27:59.577 09:02:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:59.577 09:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.577 09:02:16 -- common/autotest_common.sh@10 -- # set +x 00:27:59.577 09:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.577 09:02:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:59.577 09:02:16 -- nvmf/common.sh@717 -- # local ip 00:27:59.577 09:02:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:59.577 09:02:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:59.577 09:02:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.577 09:02:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.577 09:02:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:59.577 09:02:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.577 09:02:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:59.577 09:02:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:59.577 09:02:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:59.577 09:02:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:59.577 09:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.577 09:02:16 -- common/autotest_common.sh@10 -- # set +x 00:28:00.143 nvme0n1 00:28:00.143 09:02:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.143 09:02:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.143 09:02:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.143 09:02:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:00.143 09:02:17 -- common/autotest_common.sh@10 -- # set +x 00:28:00.401 09:02:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.401 09:02:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.401 09:02:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.401 09:02:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.401 09:02:17 -- common/autotest_common.sh@10 -- # set +x 00:28:00.401 09:02:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.401 09:02:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:00.401 09:02:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:00.401 09:02:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:00.401 09:02:17 -- host/auth.sh@44 -- # digest=sha256 00:28:00.401 09:02:17 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:00.401 09:02:17 -- host/auth.sh@44 -- # keyid=4 00:28:00.401 09:02:17 -- host/auth.sh@45 -- # key=DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:00.401 
09:02:17 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:00.401 09:02:17 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:00.401 09:02:17 -- host/auth.sh@49 -- # echo DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:00.401 09:02:17 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:28:00.401 09:02:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:00.401 09:02:17 -- host/auth.sh@68 -- # digest=sha256 00:28:00.401 09:02:17 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:00.401 09:02:17 -- host/auth.sh@68 -- # keyid=4 00:28:00.401 09:02:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:00.401 09:02:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.401 09:02:17 -- common/autotest_common.sh@10 -- # set +x 00:28:00.401 09:02:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.401 09:02:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:00.401 09:02:17 -- nvmf/common.sh@717 -- # local ip 00:28:00.401 09:02:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:00.401 09:02:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:00.401 09:02:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.401 09:02:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.401 09:02:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:00.401 09:02:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.401 09:02:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:00.401 09:02:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:00.401 09:02:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:00.401 09:02:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.401 09:02:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.401 09:02:17 -- common/autotest_common.sh@10 -- # set +x 00:28:00.967 nvme0n1 00:28:00.967 09:02:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.967 09:02:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.967 09:02:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:00.967 09:02:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.967 09:02:17 -- common/autotest_common.sh@10 -- # set +x 00:28:00.967 09:02:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.967 09:02:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.967 09:02:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.967 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.967 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:00.967 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.967 09:02:18 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:28:00.967 09:02:18 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.967 09:02:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:00.967 09:02:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:00.967 09:02:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:00.967 09:02:18 -- host/auth.sh@44 -- # digest=sha384 00:28:00.967 09:02:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.967 09:02:18 -- host/auth.sh@44 -- # keyid=0 00:28:00.967 09:02:18 -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:00.967 09:02:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:00.967 09:02:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:00.967 09:02:18 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:00.967 09:02:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:28:00.967 09:02:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:00.967 09:02:18 -- host/auth.sh@68 -- # digest=sha384 00:28:00.967 09:02:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:00.967 09:02:18 -- host/auth.sh@68 -- # keyid=0 00:28:00.967 09:02:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:00.967 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.967 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:00.967 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.967 09:02:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:00.967 09:02:18 -- nvmf/common.sh@717 -- # local ip 00:28:00.967 09:02:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:00.967 09:02:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:00.967 09:02:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.967 09:02:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.967 09:02:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:00.967 09:02:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.967 09:02:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:00.967 09:02:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:00.967 09:02:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:00.967 09:02:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:00.967 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.967 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:00.967 nvme0n1 00:28:00.967 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:00.968 09:02:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.968 09:02:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:00.968 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:00.968 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:00.968 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.226 09:02:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.226 09:02:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.227 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.227 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:01.227 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.227 09:02:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:01.227 09:02:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:01.227 09:02:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:01.227 09:02:18 -- host/auth.sh@44 -- # digest=sha384 00:28:01.227 09:02:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:01.227 09:02:18 -- host/auth.sh@44 -- # keyid=1 00:28:01.227 09:02:18 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:01.227 09:02:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:01.227 
09:02:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:01.227 09:02:18 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:01.227 09:02:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:28:01.227 09:02:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:01.227 09:02:18 -- host/auth.sh@68 -- # digest=sha384 00:28:01.227 09:02:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:01.227 09:02:18 -- host/auth.sh@68 -- # keyid=1 00:28:01.227 09:02:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:01.227 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.227 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:01.227 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.227 09:02:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:01.227 09:02:18 -- nvmf/common.sh@717 -- # local ip 00:28:01.227 09:02:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:01.227 09:02:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:01.227 09:02:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.227 09:02:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.227 09:02:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:01.227 09:02:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.227 09:02:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:01.227 09:02:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:01.227 09:02:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:01.227 09:02:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:01.227 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.227 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:01.227 nvme0n1 00:28:01.227 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.227 09:02:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.227 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.227 09:02:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:01.227 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:01.227 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.227 09:02:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.227 09:02:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.227 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.227 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:01.227 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.227 09:02:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:01.227 09:02:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:01.227 09:02:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:01.227 09:02:18 -- host/auth.sh@44 -- # digest=sha384 00:28:01.227 09:02:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:01.227 09:02:18 -- host/auth.sh@44 -- # keyid=2 00:28:01.227 09:02:18 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:01.227 09:02:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:01.227 09:02:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:01.227 09:02:18 -- host/auth.sh@49 -- # echo 
DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:01.227 09:02:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:28:01.227 09:02:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:01.227 09:02:18 -- host/auth.sh@68 -- # digest=sha384 00:28:01.227 09:02:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:01.227 09:02:18 -- host/auth.sh@68 -- # keyid=2 00:28:01.227 09:02:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:01.227 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.227 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:01.227 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.227 09:02:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:01.227 09:02:18 -- nvmf/common.sh@717 -- # local ip 00:28:01.227 09:02:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:01.227 09:02:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:01.227 09:02:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.227 09:02:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.227 09:02:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:01.227 09:02:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.227 09:02:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:01.486 09:02:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:01.486 09:02:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:01.486 09:02:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:01.486 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.486 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:01.486 nvme0n1 00:28:01.486 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.486 09:02:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.486 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.486 09:02:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:01.486 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:01.486 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.486 09:02:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.486 09:02:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.486 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.486 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:01.486 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.486 09:02:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:01.486 09:02:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:01.486 09:02:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:01.486 09:02:18 -- host/auth.sh@44 -- # digest=sha384 00:28:01.486 09:02:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:01.486 09:02:18 -- host/auth.sh@44 -- # keyid=3 00:28:01.486 09:02:18 -- host/auth.sh@45 -- # key=DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:01.486 09:02:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:01.486 09:02:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:01.486 09:02:18 -- host/auth.sh@49 -- # echo DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:01.486 09:02:18 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:28:01.486 09:02:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:01.486 09:02:18 -- host/auth.sh@68 -- # digest=sha384 00:28:01.486 09:02:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:01.486 09:02:18 -- host/auth.sh@68 -- # keyid=3 00:28:01.486 09:02:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:01.486 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.486 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:01.486 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.486 09:02:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:01.486 09:02:18 -- nvmf/common.sh@717 -- # local ip 00:28:01.486 09:02:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:01.486 09:02:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:01.486 09:02:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.486 09:02:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.486 09:02:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:01.486 09:02:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.486 09:02:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:01.486 09:02:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:01.486 09:02:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:01.486 09:02:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:01.486 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.486 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:01.759 nvme0n1 00:28:01.759 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.759 09:02:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:01.759 09:02:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.759 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.759 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:01.759 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.759 09:02:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.759 09:02:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.759 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.759 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:01.759 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.759 09:02:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:01.759 09:02:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:01.759 09:02:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:01.759 09:02:18 -- host/auth.sh@44 -- # digest=sha384 00:28:01.759 09:02:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:01.759 09:02:18 -- host/auth.sh@44 -- # keyid=4 00:28:01.759 09:02:18 -- host/auth.sh@45 -- # key=DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:01.759 09:02:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:01.759 09:02:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:01.759 09:02:18 -- host/auth.sh@49 -- # echo DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:01.759 09:02:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:28:01.759 09:02:18 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:28:01.759 09:02:18 -- host/auth.sh@68 -- # digest=sha384 00:28:01.759 09:02:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:01.760 09:02:18 -- host/auth.sh@68 -- # keyid=4 00:28:01.760 09:02:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:01.760 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.760 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:01.760 09:02:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.760 09:02:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:01.760 09:02:18 -- nvmf/common.sh@717 -- # local ip 00:28:01.760 09:02:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:01.760 09:02:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:01.760 09:02:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.760 09:02:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.760 09:02:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:01.760 09:02:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.760 09:02:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:01.760 09:02:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:01.760 09:02:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:01.760 09:02:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.760 09:02:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.760 09:02:18 -- common/autotest_common.sh@10 -- # set +x 00:28:02.022 nvme0n1 00:28:02.022 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.022 09:02:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.022 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.022 09:02:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:02.022 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.022 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.022 09:02:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.022 09:02:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.022 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.022 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.022 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.022 09:02:19 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.022 09:02:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:02.022 09:02:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:02.023 09:02:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:02.023 09:02:19 -- host/auth.sh@44 -- # digest=sha384 00:28:02.023 09:02:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:02.023 09:02:19 -- host/auth.sh@44 -- # keyid=0 00:28:02.023 09:02:19 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:02.023 09:02:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:02.023 09:02:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:02.023 09:02:19 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:02.023 09:02:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:28:02.023 09:02:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:02.023 09:02:19 -- host/auth.sh@68 -- # 
digest=sha384 00:28:02.023 09:02:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:02.023 09:02:19 -- host/auth.sh@68 -- # keyid=0 00:28:02.023 09:02:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:02.023 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.023 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.023 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.023 09:02:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:02.023 09:02:19 -- nvmf/common.sh@717 -- # local ip 00:28:02.023 09:02:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:02.023 09:02:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:02.023 09:02:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.023 09:02:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.023 09:02:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:02.023 09:02:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.023 09:02:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:02.023 09:02:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:02.023 09:02:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:02.023 09:02:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:02.023 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.023 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.023 nvme0n1 00:28:02.023 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.023 09:02:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.023 09:02:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:02.023 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.023 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.281 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.281 09:02:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.281 09:02:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.281 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.281 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.281 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.281 09:02:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:02.281 09:02:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:02.281 09:02:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:02.281 09:02:19 -- host/auth.sh@44 -- # digest=sha384 00:28:02.281 09:02:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:02.281 09:02:19 -- host/auth.sh@44 -- # keyid=1 00:28:02.281 09:02:19 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:02.281 09:02:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:02.281 09:02:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:02.281 09:02:19 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:02.281 09:02:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:28:02.281 09:02:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:02.281 09:02:19 -- host/auth.sh@68 -- # digest=sha384 00:28:02.281 09:02:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:02.281 09:02:19 -- host/auth.sh@68 
-- # keyid=1 00:28:02.281 09:02:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:02.281 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.281 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.281 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.281 09:02:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:02.281 09:02:19 -- nvmf/common.sh@717 -- # local ip 00:28:02.281 09:02:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:02.281 09:02:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:02.281 09:02:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.281 09:02:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.281 09:02:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:02.281 09:02:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.281 09:02:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:02.281 09:02:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:02.281 09:02:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:02.281 09:02:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:02.281 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.281 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.281 nvme0n1 00:28:02.281 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.281 09:02:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.281 09:02:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:02.281 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.281 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.281 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.540 09:02:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.540 09:02:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.540 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.540 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.540 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.540 09:02:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:02.540 09:02:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:02.540 09:02:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:02.540 09:02:19 -- host/auth.sh@44 -- # digest=sha384 00:28:02.540 09:02:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:02.540 09:02:19 -- host/auth.sh@44 -- # keyid=2 00:28:02.540 09:02:19 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:02.540 09:02:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:02.540 09:02:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:02.540 09:02:19 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:02.540 09:02:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:28:02.540 09:02:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:02.540 09:02:19 -- host/auth.sh@68 -- # digest=sha384 00:28:02.540 09:02:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:02.540 09:02:19 -- host/auth.sh@68 -- # keyid=2 00:28:02.540 09:02:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:02.540 09:02:19 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.540 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.540 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.540 09:02:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:02.540 09:02:19 -- nvmf/common.sh@717 -- # local ip 00:28:02.540 09:02:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:02.540 09:02:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:02.540 09:02:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.540 09:02:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.540 09:02:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:02.540 09:02:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.540 09:02:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:02.540 09:02:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:02.540 09:02:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:02.540 09:02:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:02.540 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.540 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.540 nvme0n1 00:28:02.540 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.540 09:02:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.540 09:02:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:02.540 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.540 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.540 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.540 09:02:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.540 09:02:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.540 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.540 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.540 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.540 09:02:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:02.540 09:02:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:02.540 09:02:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:02.540 09:02:19 -- host/auth.sh@44 -- # digest=sha384 00:28:02.540 09:02:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:02.540 09:02:19 -- host/auth.sh@44 -- # keyid=3 00:28:02.540 09:02:19 -- host/auth.sh@45 -- # key=DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:02.540 09:02:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:02.540 09:02:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:02.540 09:02:19 -- host/auth.sh@49 -- # echo DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:02.540 09:02:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:28:02.540 09:02:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:02.540 09:02:19 -- host/auth.sh@68 -- # digest=sha384 00:28:02.540 09:02:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:02.540 09:02:19 -- host/auth.sh@68 -- # keyid=3 00:28:02.540 09:02:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:02.540 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.540 09:02:19 -- common/autotest_common.sh@10 -- # set +x 
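Every iteration traced above exercises the same round: nvmet_auth_set_key provisions one DHHC-1 secret on the kernel nvmet target (the echoed 'hmac(...)', dhgroup and key values; the trace does not show where they are written), then connect_authenticate restricts the initiator to that one digest/dhgroup pair, attaches with the matching secret, verifies the controller came up, and detaches. A minimal bash sketch of the host side, assembled only from the rpc_cmd invocations visible in the trace (rpc_cmd is the suite's JSON-RPC wrapper; any helper internals beyond these calls are assumptions):

    # One connect_authenticate round for a given digest, dhgroup and key id.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Limit the host to exactly the parameters under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach with the secret the target was provisioned with; the DH-HMAC-CHAP
        # handshake must succeed for the controller to appear.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
        # Successful authentication leaves a controller named nvme0.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The outer loops in the trace (host/auth.sh@107-109) simply call this for every digest, every dhgroup, and every key id in turn.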
00:28:02.798 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.798 09:02:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:02.798 09:02:19 -- nvmf/common.sh@717 -- # local ip 00:28:02.798 09:02:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:02.798 09:02:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:02.798 09:02:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.798 09:02:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.798 09:02:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:02.798 09:02:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.798 09:02:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:02.798 09:02:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:02.798 09:02:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:02.798 09:02:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:02.798 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.798 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.798 nvme0n1 00:28:02.798 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.798 09:02:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.798 09:02:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:02.798 09:02:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.798 09:02:19 -- common/autotest_common.sh@10 -- # set +x 00:28:02.798 09:02:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.798 09:02:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.798 09:02:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.798 09:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.798 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:28:02.798 09:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.798 09:02:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:02.798 09:02:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:02.798 09:02:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:02.798 09:02:20 -- host/auth.sh@44 -- # digest=sha384 00:28:02.798 09:02:20 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:02.798 09:02:20 -- host/auth.sh@44 -- # keyid=4 00:28:02.798 09:02:20 -- host/auth.sh@45 -- # key=DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:02.798 09:02:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:02.798 09:02:20 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:02.798 09:02:20 -- host/auth.sh@49 -- # echo DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:02.798 09:02:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:28:02.798 09:02:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:02.798 09:02:20 -- host/auth.sh@68 -- # digest=sha384 00:28:02.798 09:02:20 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:02.798 09:02:20 -- host/auth.sh@68 -- # keyid=4 00:28:02.798 09:02:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:02.798 09:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.798 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:28:02.798 09:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
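Before each attach, get_main_ns_ip resolves which address the host should dial. Reconstructed from the expansions in the trace, it maps the transport to an environment-variable name and prints that variable's value (10.0.0.1 in this run); the name of the transport variable and the exact fallback checks are assumptions:

    # Pick the initiator-facing IP for the transport under test.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # first target IP for RDMA runs
            [tcp]=NVMF_INITIATOR_IP       # initiator IP for TCP runs (10.0.0.1 here)
        )
        [[ -n $TEST_TRANSPORT ]] || return 1   # transport variable name assumed
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -n ${!ip} ]] && echo "${!ip}"       # indirect expansion prints the address
    }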
00:28:02.798 09:02:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:02.798 09:02:20 -- nvmf/common.sh@717 -- # local ip 00:28:02.798 09:02:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:02.798 09:02:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:02.798 09:02:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.798 09:02:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.798 09:02:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:02.798 09:02:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.798 09:02:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:02.798 09:02:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:02.798 09:02:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:02.798 09:02:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:02.798 09:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.798 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:28:03.056 nvme0n1 00:28:03.056 09:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.056 09:02:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.056 09:02:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:03.056 09:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.056 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:28:03.056 09:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.056 09:02:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.056 09:02:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.056 09:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.056 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:28:03.056 09:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.056 09:02:20 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:03.056 09:02:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:03.056 09:02:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:03.056 09:02:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:03.056 09:02:20 -- host/auth.sh@44 -- # digest=sha384 00:28:03.056 09:02:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:03.056 09:02:20 -- host/auth.sh@44 -- # keyid=0 00:28:03.056 09:02:20 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:03.056 09:02:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:03.056 09:02:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:03.056 09:02:20 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:03.056 09:02:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:28:03.056 09:02:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:03.056 09:02:20 -- host/auth.sh@68 -- # digest=sha384 00:28:03.056 09:02:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:03.056 09:02:20 -- host/auth.sh@68 -- # keyid=0 00:28:03.056 09:02:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:03.056 09:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.056 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:28:03.056 09:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.056 09:02:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:03.056 09:02:20 -- 
nvmf/common.sh@717 -- # local ip 00:28:03.056 09:02:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:03.056 09:02:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:03.056 09:02:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.056 09:02:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.056 09:02:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:03.056 09:02:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.056 09:02:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:03.056 09:02:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:03.056 09:02:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:03.056 09:02:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:03.056 09:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.056 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:28:03.314 nvme0n1 00:28:03.314 09:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.314 09:02:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.314 09:02:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:03.314 09:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.314 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:28:03.314 09:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.314 09:02:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.314 09:02:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.314 09:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.314 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:28:03.571 09:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.571 09:02:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:03.571 09:02:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:03.571 09:02:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:03.571 09:02:20 -- host/auth.sh@44 -- # digest=sha384 00:28:03.571 09:02:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:03.571 09:02:20 -- host/auth.sh@44 -- # keyid=1 00:28:03.571 09:02:20 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:03.571 09:02:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:03.571 09:02:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:03.571 09:02:20 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:03.571 09:02:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:28:03.571 09:02:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:03.571 09:02:20 -- host/auth.sh@68 -- # digest=sha384 00:28:03.571 09:02:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:03.571 09:02:20 -- host/auth.sh@68 -- # keyid=1 00:28:03.571 09:02:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:03.571 09:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.571 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:28:03.571 09:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.571 09:02:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:03.571 09:02:20 -- nvmf/common.sh@717 -- # local ip 00:28:03.571 09:02:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:03.571 09:02:20 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:03.571 09:02:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.572 09:02:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.572 09:02:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:03.572 09:02:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.572 09:02:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:03.572 09:02:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:03.572 09:02:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:03.572 09:02:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:03.572 09:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.572 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:28:03.572 nvme0n1 00:28:03.572 09:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.572 09:02:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.572 09:02:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:03.572 09:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.572 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:28:03.829 09:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.829 09:02:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.829 09:02:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.829 09:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.829 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:28:03.829 09:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.829 09:02:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:03.829 09:02:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:03.829 09:02:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:03.829 09:02:20 -- host/auth.sh@44 -- # digest=sha384 00:28:03.829 09:02:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:03.829 09:02:20 -- host/auth.sh@44 -- # keyid=2 00:28:03.829 09:02:20 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:03.829 09:02:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:03.829 09:02:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:03.829 09:02:20 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:03.829 09:02:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:28:03.829 09:02:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:03.829 09:02:20 -- host/auth.sh@68 -- # digest=sha384 00:28:03.829 09:02:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:03.829 09:02:20 -- host/auth.sh@68 -- # keyid=2 00:28:03.829 09:02:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:03.829 09:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.829 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:28:03.829 09:02:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.829 09:02:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:03.829 09:02:20 -- nvmf/common.sh@717 -- # local ip 00:28:03.829 09:02:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:03.829 09:02:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:03.829 09:02:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.829 09:02:20 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.829 09:02:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:03.829 09:02:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.829 09:02:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:03.829 09:02:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:03.829 09:02:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:03.829 09:02:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:03.829 09:02:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.829 09:02:20 -- common/autotest_common.sh@10 -- # set +x 00:28:04.087 nvme0n1 00:28:04.087 09:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.087 09:02:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:04.087 09:02:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.087 09:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.087 09:02:21 -- common/autotest_common.sh@10 -- # set +x 00:28:04.087 09:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.087 09:02:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.087 09:02:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.087 09:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.087 09:02:21 -- common/autotest_common.sh@10 -- # set +x 00:28:04.087 09:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.087 09:02:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:04.087 09:02:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:04.087 09:02:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:04.087 09:02:21 -- host/auth.sh@44 -- # digest=sha384 00:28:04.087 09:02:21 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:04.087 09:02:21 -- host/auth.sh@44 -- # keyid=3 00:28:04.087 09:02:21 -- host/auth.sh@45 -- # key=DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:04.087 09:02:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:04.087 09:02:21 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:04.087 09:02:21 -- host/auth.sh@49 -- # echo DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:04.087 09:02:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:28:04.087 09:02:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:04.087 09:02:21 -- host/auth.sh@68 -- # digest=sha384 00:28:04.087 09:02:21 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:04.087 09:02:21 -- host/auth.sh@68 -- # keyid=3 00:28:04.087 09:02:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:04.087 09:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.087 09:02:21 -- common/autotest_common.sh@10 -- # set +x 00:28:04.087 09:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.087 09:02:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:04.087 09:02:21 -- nvmf/common.sh@717 -- # local ip 00:28:04.087 09:02:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:04.087 09:02:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:04.087 09:02:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.087 09:02:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.087 09:02:21 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:28:04.087 09:02:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.087 09:02:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:04.087 09:02:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:04.087 09:02:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:04.087 09:02:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:04.087 09:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.087 09:02:21 -- common/autotest_common.sh@10 -- # set +x 00:28:04.344 nvme0n1 00:28:04.344 09:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.344 09:02:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:04.344 09:02:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.344 09:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.344 09:02:21 -- common/autotest_common.sh@10 -- # set +x 00:28:04.344 09:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.344 09:02:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.344 09:02:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.344 09:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.344 09:02:21 -- common/autotest_common.sh@10 -- # set +x 00:28:04.344 09:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.344 09:02:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:04.344 09:02:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:04.344 09:02:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:04.344 09:02:21 -- host/auth.sh@44 -- # digest=sha384 00:28:04.344 09:02:21 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:04.344 09:02:21 -- host/auth.sh@44 -- # keyid=4 00:28:04.344 09:02:21 -- host/auth.sh@45 -- # key=DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:04.344 09:02:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:04.344 09:02:21 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:04.344 09:02:21 -- host/auth.sh@49 -- # echo DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:04.344 09:02:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:28:04.344 09:02:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:04.344 09:02:21 -- host/auth.sh@68 -- # digest=sha384 00:28:04.344 09:02:21 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:04.344 09:02:21 -- host/auth.sh@68 -- # keyid=4 00:28:04.344 09:02:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:04.344 09:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.344 09:02:21 -- common/autotest_common.sh@10 -- # set +x 00:28:04.344 09:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.344 09:02:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:04.345 09:02:21 -- nvmf/common.sh@717 -- # local ip 00:28:04.345 09:02:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:04.345 09:02:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:04.345 09:02:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.345 09:02:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.345 09:02:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:04.345 09:02:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:28:04.345 09:02:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:04.345 09:02:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:04.345 09:02:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:04.345 09:02:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:04.345 09:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.345 09:02:21 -- common/autotest_common.sh@10 -- # set +x 00:28:04.602 nvme0n1 00:28:04.602 09:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.602 09:02:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.602 09:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.602 09:02:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:04.602 09:02:21 -- common/autotest_common.sh@10 -- # set +x 00:28:04.602 09:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.602 09:02:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.602 09:02:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.602 09:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.602 09:02:21 -- common/autotest_common.sh@10 -- # set +x 00:28:04.602 09:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.602 09:02:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:04.602 09:02:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:04.602 09:02:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:04.602 09:02:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:04.602 09:02:21 -- host/auth.sh@44 -- # digest=sha384 00:28:04.602 09:02:21 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:04.602 09:02:21 -- host/auth.sh@44 -- # keyid=0 00:28:04.602 09:02:21 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:04.602 09:02:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:04.602 09:02:21 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:04.602 09:02:21 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:04.602 09:02:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:28:04.602 09:02:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:04.602 09:02:21 -- host/auth.sh@68 -- # digest=sha384 00:28:04.602 09:02:21 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:04.602 09:02:21 -- host/auth.sh@68 -- # keyid=0 00:28:04.602 09:02:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:04.602 09:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.602 09:02:21 -- common/autotest_common.sh@10 -- # set +x 00:28:04.602 09:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.602 09:02:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:04.602 09:02:21 -- nvmf/common.sh@717 -- # local ip 00:28:04.602 09:02:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:04.602 09:02:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:04.602 09:02:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.602 09:02:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.602 09:02:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:04.602 09:02:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.602 09:02:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:04.602 
09:02:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:04.602 09:02:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:04.602 09:02:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:04.602 09:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.602 09:02:21 -- common/autotest_common.sh@10 -- # set +x 00:28:05.167 nvme0n1 00:28:05.167 09:02:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.167 09:02:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.167 09:02:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:05.167 09:02:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.167 09:02:22 -- common/autotest_common.sh@10 -- # set +x 00:28:05.167 09:02:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.167 09:02:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.167 09:02:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.167 09:02:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.167 09:02:22 -- common/autotest_common.sh@10 -- # set +x 00:28:05.167 09:02:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.167 09:02:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:05.167 09:02:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:05.167 09:02:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:05.167 09:02:22 -- host/auth.sh@44 -- # digest=sha384 00:28:05.167 09:02:22 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:05.167 09:02:22 -- host/auth.sh@44 -- # keyid=1 00:28:05.167 09:02:22 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:05.168 09:02:22 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:05.168 09:02:22 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:05.168 09:02:22 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:05.168 09:02:22 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:28:05.168 09:02:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:05.168 09:02:22 -- host/auth.sh@68 -- # digest=sha384 00:28:05.168 09:02:22 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:05.168 09:02:22 -- host/auth.sh@68 -- # keyid=1 00:28:05.168 09:02:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:05.168 09:02:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.168 09:02:22 -- common/autotest_common.sh@10 -- # set +x 00:28:05.168 09:02:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.168 09:02:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:05.168 09:02:22 -- nvmf/common.sh@717 -- # local ip 00:28:05.168 09:02:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:05.168 09:02:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:05.168 09:02:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.168 09:02:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.168 09:02:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:05.168 09:02:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.168 09:02:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:05.168 09:02:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:05.168 09:02:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
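The nvmf/common.sh@717-731 lines that keep recurring above are the trace of get_main_ns_ip: it maps the transport to the name of the environment variable that holds the initiator address, expands that name indirectly, and echoes the result (here 10.0.0.1). A minimal sketch of that logic, read straight off the xtrace; the transport variable name (TEST_TRANSPORT below) is an assumption, since the trace only ever shows its expanded value "tcp":

    # Sketch of the address lookup traced at nvmf/common.sh@717-731.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()

        # Each transport resolves through a differently named variable.
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                  # [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        ip=${ip_candidates[$TEST_TRANSPORT]}  # ip=NVMF_INITIATOR_IP
        ip=${!ip}                             # indirect expansion -> 10.0.0.1
        [[ -z $ip ]] && return 1
        echo "$ip"
    }
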
00:28:05.168 09:02:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:05.168 09:02:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.168 09:02:22 -- common/autotest_common.sh@10 -- # set +x 00:28:05.424 nvme0n1 00:28:05.424 09:02:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.424 09:02:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.424 09:02:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:05.424 09:02:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.424 09:02:22 -- common/autotest_common.sh@10 -- # set +x 00:28:05.424 09:02:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.424 09:02:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.424 09:02:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.424 09:02:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.424 09:02:22 -- common/autotest_common.sh@10 -- # set +x 00:28:05.424 09:02:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.424 09:02:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:05.424 09:02:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:05.424 09:02:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:05.424 09:02:22 -- host/auth.sh@44 -- # digest=sha384 00:28:05.424 09:02:22 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:05.424 09:02:22 -- host/auth.sh@44 -- # keyid=2 00:28:05.424 09:02:22 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:05.424 09:02:22 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:05.424 09:02:22 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:05.424 09:02:22 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:05.424 09:02:22 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:28:05.424 09:02:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:05.424 09:02:22 -- host/auth.sh@68 -- # digest=sha384 00:28:05.424 09:02:22 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:05.424 09:02:22 -- host/auth.sh@68 -- # keyid=2 00:28:05.424 09:02:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:05.424 09:02:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.424 09:02:22 -- common/autotest_common.sh@10 -- # set +x 00:28:05.424 09:02:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.424 09:02:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:05.424 09:02:22 -- nvmf/common.sh@717 -- # local ip 00:28:05.425 09:02:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:05.425 09:02:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:05.425 09:02:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.425 09:02:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.425 09:02:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:05.425 09:02:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.425 09:02:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:05.425 09:02:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:05.425 09:02:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:05.425 09:02:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:05.425 09:02:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.425 09:02:22 -- common/autotest_common.sh@10 -- # set +x 00:28:05.990 nvme0n1 00:28:05.990 09:02:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.990 09:02:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:05.990 09:02:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.990 09:02:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.990 09:02:23 -- common/autotest_common.sh@10 -- # set +x 00:28:05.990 09:02:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.990 09:02:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.990 09:02:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.990 09:02:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.990 09:02:23 -- common/autotest_common.sh@10 -- # set +x 00:28:05.990 09:02:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.990 09:02:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:05.990 09:02:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:05.990 09:02:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:05.990 09:02:23 -- host/auth.sh@44 -- # digest=sha384 00:28:05.990 09:02:23 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:05.990 09:02:23 -- host/auth.sh@44 -- # keyid=3 00:28:05.990 09:02:23 -- host/auth.sh@45 -- # key=DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:05.990 09:02:23 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:05.990 09:02:23 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:05.990 09:02:23 -- host/auth.sh@49 -- # echo DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:05.990 09:02:23 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:28:05.990 09:02:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:05.990 09:02:23 -- host/auth.sh@68 -- # digest=sha384 00:28:05.990 09:02:23 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:05.990 09:02:23 -- host/auth.sh@68 -- # keyid=3 00:28:05.990 09:02:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:05.990 09:02:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:05.990 09:02:23 -- common/autotest_common.sh@10 -- # set +x 00:28:05.990 09:02:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:05.990 09:02:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:05.990 09:02:23 -- nvmf/common.sh@717 -- # local ip 00:28:05.990 09:02:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:05.990 09:02:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:05.990 09:02:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.990 09:02:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.990 09:02:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:05.990 09:02:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.990 09:02:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:05.990 09:02:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:05.990 09:02:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:05.990 09:02:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:05.990 09:02:23 -- common/autotest_common.sh@549 -- # xtrace_disable 
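Each nvmet_auth_set_key call (host/auth.sh@42-49) programs the kernel target's side of the handshake with a digest, a DH group, and one of the five DHHC-1 secrets. xtrace never prints redirections, so the three bare echoes in the trace give no hint of their destination; the configfs attributes in the sketch below are an assumption based on the Linux nvmet per-host auth attributes, not something this log proves. In the DHHC-1 strings themselves, the two-digit field after the prefix records how the secret was transformed (00 = used as-is, 01/02/03 = SHA-256/-384/-512), which is why keys 0 through 4 in this run carry 00, 00, 01, 02 and 03.

    # Sketch of nvmet_auth_set_key as implied by host/auth.sh@42-49. The
    # echoes match the trace; the configfs paths are assumed, and ${keys[@]}
    # is the array of DHHC-1 secrets set up earlier in the script.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]}
        # Assumed target: per-host auth attributes of the kernel nvmet target.
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host_dir/dhchap_hash"  # e.g. hmac(sha384)
        echo "$dhgroup" > "$host_dir/dhchap_dhgroup"    # e.g. ffdhe6144
        echo "$key" > "$host_dir/dhchap_key"            # DHHC-1:0N:<base64>:
    }
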
00:28:05.990 09:02:23 -- common/autotest_common.sh@10 -- # set +x 00:28:06.255 nvme0n1 00:28:06.255 09:02:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.255 09:02:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.255 09:02:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.255 09:02:23 -- common/autotest_common.sh@10 -- # set +x 00:28:06.255 09:02:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:06.255 09:02:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.255 09:02:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.255 09:02:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.255 09:02:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.255 09:02:23 -- common/autotest_common.sh@10 -- # set +x 00:28:06.255 09:02:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.255 09:02:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:06.255 09:02:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:06.255 09:02:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:06.255 09:02:23 -- host/auth.sh@44 -- # digest=sha384 00:28:06.255 09:02:23 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:06.255 09:02:23 -- host/auth.sh@44 -- # keyid=4 00:28:06.255 09:02:23 -- host/auth.sh@45 -- # key=DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:06.255 09:02:23 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:06.255 09:02:23 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:06.255 09:02:23 -- host/auth.sh@49 -- # echo DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:06.255 09:02:23 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:28:06.255 09:02:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:06.255 09:02:23 -- host/auth.sh@68 -- # digest=sha384 00:28:06.255 09:02:23 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:06.255 09:02:23 -- host/auth.sh@68 -- # keyid=4 00:28:06.255 09:02:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:06.255 09:02:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.255 09:02:23 -- common/autotest_common.sh@10 -- # set +x 00:28:06.511 09:02:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.511 09:02:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:06.511 09:02:23 -- nvmf/common.sh@717 -- # local ip 00:28:06.511 09:02:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:06.511 09:02:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:06.511 09:02:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.511 09:02:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.511 09:02:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:06.511 09:02:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.511 09:02:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:06.511 09:02:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:06.511 09:02:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:06.511 09:02:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:06.511 09:02:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.511 09:02:23 -- common/autotest_common.sh@10 -- # set +x 00:28:06.768 
nvme0n1 00:28:06.768 09:02:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.768 09:02:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.768 09:02:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.768 09:02:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:06.768 09:02:23 -- common/autotest_common.sh@10 -- # set +x 00:28:06.768 09:02:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.768 09:02:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.768 09:02:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.768 09:02:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.768 09:02:23 -- common/autotest_common.sh@10 -- # set +x 00:28:06.768 09:02:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.768 09:02:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.768 09:02:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:06.768 09:02:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:06.768 09:02:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:06.768 09:02:23 -- host/auth.sh@44 -- # digest=sha384 00:28:06.768 09:02:23 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:06.768 09:02:23 -- host/auth.sh@44 -- # keyid=0 00:28:06.768 09:02:23 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:06.768 09:02:23 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:06.768 09:02:23 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:06.768 09:02:23 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:06.768 09:02:23 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:28:06.768 09:02:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:06.768 09:02:23 -- host/auth.sh@68 -- # digest=sha384 00:28:06.768 09:02:23 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:06.768 09:02:23 -- host/auth.sh@68 -- # keyid=0 00:28:06.768 09:02:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:06.768 09:02:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.768 09:02:23 -- common/autotest_common.sh@10 -- # set +x 00:28:06.768 09:02:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:06.768 09:02:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:06.768 09:02:23 -- nvmf/common.sh@717 -- # local ip 00:28:06.768 09:02:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:06.768 09:02:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:06.768 09:02:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.768 09:02:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.768 09:02:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:06.768 09:02:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.768 09:02:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:06.768 09:02:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:06.768 09:02:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:06.768 09:02:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:06.768 09:02:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:06.768 09:02:23 -- common/autotest_common.sh@10 -- # set +x 00:28:07.332 nvme0n1 00:28:07.332 09:02:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
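What follows at host/auth.sh@73-74 is the verification half of connect_authenticate: list the controllers over RPC, pull the names out with jq, compare against nvme0, and detach. The backslashes in [[ nvme0 == \n\v\m\e\0 ]] are not in the script; xtrace escapes the right-hand side of == because bash treats it as a glob pattern, so the script simply compares against the literal string nvme0. Putting together the pieces visible at @66-74, the whole helper reduces to roughly the sketch below; $hostnqn and $subnqn stand in for the literal NQNs in the trace and are assumed names:

    # Reconstruction of connect_authenticate from the host/auth.sh@66-74 trace:
    # pin the initiator to one digest/dhgroup pair, attach with a DH-HMAC-CHAP
    # key, prove the controller exists, then tear it down for the next round.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3

        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"

        # The attach only counts if the controller actually shows up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
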
00:28:07.332 09:02:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.332 09:02:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:07.332 09:02:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.332 09:02:24 -- common/autotest_common.sh@10 -- # set +x 00:28:07.332 09:02:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.332 09:02:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.332 09:02:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.332 09:02:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.332 09:02:24 -- common/autotest_common.sh@10 -- # set +x 00:28:07.332 09:02:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.332 09:02:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:07.332 09:02:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:07.332 09:02:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:07.332 09:02:24 -- host/auth.sh@44 -- # digest=sha384 00:28:07.332 09:02:24 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:07.332 09:02:24 -- host/auth.sh@44 -- # keyid=1 00:28:07.332 09:02:24 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:07.332 09:02:24 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:07.332 09:02:24 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:07.332 09:02:24 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:07.332 09:02:24 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:28:07.332 09:02:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:07.332 09:02:24 -- host/auth.sh@68 -- # digest=sha384 00:28:07.332 09:02:24 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:07.332 09:02:24 -- host/auth.sh@68 -- # keyid=1 00:28:07.332 09:02:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:07.332 09:02:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.332 09:02:24 -- common/autotest_common.sh@10 -- # set +x 00:28:07.332 09:02:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.333 09:02:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:07.333 09:02:24 -- nvmf/common.sh@717 -- # local ip 00:28:07.333 09:02:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:07.333 09:02:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:07.333 09:02:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.333 09:02:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.333 09:02:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:07.333 09:02:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.333 09:02:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:07.333 09:02:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:07.333 09:02:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:07.333 09:02:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:07.333 09:02:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.333 09:02:24 -- common/autotest_common.sh@10 -- # set +x 00:28:07.896 nvme0n1 00:28:07.896 09:02:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.896 09:02:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.896 09:02:25 -- host/auth.sh@73 
-- # jq -r '.[].name' 00:28:07.896 09:02:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.896 09:02:25 -- common/autotest_common.sh@10 -- # set +x 00:28:07.896 09:02:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.154 09:02:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.154 09:02:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.154 09:02:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.154 09:02:25 -- common/autotest_common.sh@10 -- # set +x 00:28:08.154 09:02:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.154 09:02:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:08.154 09:02:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:08.154 09:02:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:08.154 09:02:25 -- host/auth.sh@44 -- # digest=sha384 00:28:08.154 09:02:25 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:08.154 09:02:25 -- host/auth.sh@44 -- # keyid=2 00:28:08.154 09:02:25 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:08.154 09:02:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:08.154 09:02:25 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:08.154 09:02:25 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:08.154 09:02:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:28:08.154 09:02:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:08.154 09:02:25 -- host/auth.sh@68 -- # digest=sha384 00:28:08.154 09:02:25 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:08.154 09:02:25 -- host/auth.sh@68 -- # keyid=2 00:28:08.154 09:02:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:08.154 09:02:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.154 09:02:25 -- common/autotest_common.sh@10 -- # set +x 00:28:08.154 09:02:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.154 09:02:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:08.154 09:02:25 -- nvmf/common.sh@717 -- # local ip 00:28:08.154 09:02:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:08.154 09:02:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:08.154 09:02:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.154 09:02:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.154 09:02:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:08.154 09:02:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.154 09:02:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:08.154 09:02:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:08.154 09:02:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:08.154 09:02:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:08.154 09:02:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.154 09:02:25 -- common/autotest_common.sh@10 -- # set +x 00:28:08.722 nvme0n1 00:28:08.722 09:02:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.722 09:02:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.722 09:02:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:08.722 09:02:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.722 09:02:25 -- common/autotest_common.sh@10 -- # set +x 
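By this point the shape of the whole section is clear: the same set-key / connect / verify / detach cycle repeats for every key under every DH group and, per the host/auth.sh@107-109 loop headers visible in the trace, for every digest. The driver is a plain triple loop; the array contents below are inferred from the values this excerpt actually exercises (sha384 and sha512, ffdhe2048 through ffdhe8192, keys 0-4), so digests or groups outside this window may also exist:

    # The sweep behind this log (host/auth.sh@107-111): every digest x dhgroup
    # x key combination gets its own set-key + connect_authenticate round trip.
    # Array contents are inferred from this excerpt and may be incomplete.
    digests=(sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
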
00:28:08.722 09:02:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.722 09:02:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.722 09:02:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.722 09:02:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.722 09:02:25 -- common/autotest_common.sh@10 -- # set +x 00:28:08.722 09:02:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.722 09:02:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:08.722 09:02:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:08.722 09:02:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:08.722 09:02:25 -- host/auth.sh@44 -- # digest=sha384 00:28:08.722 09:02:25 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:08.722 09:02:25 -- host/auth.sh@44 -- # keyid=3 00:28:08.722 09:02:25 -- host/auth.sh@45 -- # key=DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:08.722 09:02:25 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:08.722 09:02:25 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:08.722 09:02:25 -- host/auth.sh@49 -- # echo DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:08.722 09:02:25 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:28:08.722 09:02:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:08.722 09:02:25 -- host/auth.sh@68 -- # digest=sha384 00:28:08.722 09:02:25 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:08.722 09:02:25 -- host/auth.sh@68 -- # keyid=3 00:28:08.722 09:02:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:08.722 09:02:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.722 09:02:25 -- common/autotest_common.sh@10 -- # set +x 00:28:08.722 09:02:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.722 09:02:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:08.722 09:02:25 -- nvmf/common.sh@717 -- # local ip 00:28:08.722 09:02:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:08.722 09:02:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:08.722 09:02:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.722 09:02:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.722 09:02:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:08.722 09:02:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.722 09:02:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:08.722 09:02:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:08.722 09:02:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:08.722 09:02:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:08.722 09:02:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.722 09:02:25 -- common/autotest_common.sh@10 -- # set +x 00:28:09.287 nvme0n1 00:28:09.287 09:02:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.287 09:02:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:09.287 09:02:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.287 09:02:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.287 09:02:26 -- common/autotest_common.sh@10 -- # set +x 00:28:09.287 09:02:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.287 09:02:26 -- host/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:28:09.287 09:02:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.287 09:02:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.287 09:02:26 -- common/autotest_common.sh@10 -- # set +x 00:28:09.287 09:02:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.287 09:02:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:09.287 09:02:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:09.288 09:02:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:09.288 09:02:26 -- host/auth.sh@44 -- # digest=sha384 00:28:09.288 09:02:26 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:09.288 09:02:26 -- host/auth.sh@44 -- # keyid=4 00:28:09.288 09:02:26 -- host/auth.sh@45 -- # key=DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:09.288 09:02:26 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:09.288 09:02:26 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:09.288 09:02:26 -- host/auth.sh@49 -- # echo DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:09.288 09:02:26 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:28:09.288 09:02:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:09.288 09:02:26 -- host/auth.sh@68 -- # digest=sha384 00:28:09.288 09:02:26 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:09.288 09:02:26 -- host/auth.sh@68 -- # keyid=4 00:28:09.288 09:02:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:09.288 09:02:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.288 09:02:26 -- common/autotest_common.sh@10 -- # set +x 00:28:09.288 09:02:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.288 09:02:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:09.288 09:02:26 -- nvmf/common.sh@717 -- # local ip 00:28:09.288 09:02:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:09.288 09:02:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:09.288 09:02:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.288 09:02:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.288 09:02:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:09.288 09:02:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.288 09:02:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:09.288 09:02:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:09.288 09:02:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:09.288 09:02:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:09.288 09:02:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.288 09:02:26 -- common/autotest_common.sh@10 -- # set +x 00:28:09.853 nvme0n1 00:28:09.853 09:02:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.853 09:02:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.853 09:02:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.853 09:02:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:09.853 09:02:26 -- common/autotest_common.sh@10 -- # set +x 00:28:09.853 09:02:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.853 09:02:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.853 09:02:27 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:09.853 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.853 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:09.853 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.853 09:02:27 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:28:09.853 09:02:27 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:09.853 09:02:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:09.853 09:02:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:09.853 09:02:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:09.853 09:02:27 -- host/auth.sh@44 -- # digest=sha512 00:28:09.853 09:02:27 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:09.853 09:02:27 -- host/auth.sh@44 -- # keyid=0 00:28:09.853 09:02:27 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:09.853 09:02:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:09.853 09:02:27 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:09.853 09:02:27 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:09.853 09:02:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:28:09.853 09:02:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:09.853 09:02:27 -- host/auth.sh@68 -- # digest=sha512 00:28:09.853 09:02:27 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:09.853 09:02:27 -- host/auth.sh@68 -- # keyid=0 00:28:09.853 09:02:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:09.853 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.853 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:09.853 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.853 09:02:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:09.853 09:02:27 -- nvmf/common.sh@717 -- # local ip 00:28:09.853 09:02:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:09.853 09:02:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:09.853 09:02:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.853 09:02:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.853 09:02:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:09.853 09:02:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.853 09:02:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:09.853 09:02:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:09.853 09:02:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:09.853 09:02:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:09.853 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.853 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.111 nvme0n1 00:28:10.111 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.111 09:02:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.112 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.112 09:02:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:10.112 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.112 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.112 09:02:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.112 09:02:27 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:10.112 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.112 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.112 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.112 09:02:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:10.112 09:02:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:10.112 09:02:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:10.112 09:02:27 -- host/auth.sh@44 -- # digest=sha512 00:28:10.112 09:02:27 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.112 09:02:27 -- host/auth.sh@44 -- # keyid=1 00:28:10.112 09:02:27 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:10.112 09:02:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:10.112 09:02:27 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:10.112 09:02:27 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:10.112 09:02:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:28:10.112 09:02:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:10.112 09:02:27 -- host/auth.sh@68 -- # digest=sha512 00:28:10.112 09:02:27 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:10.112 09:02:27 -- host/auth.sh@68 -- # keyid=1 00:28:10.112 09:02:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:10.112 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.112 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.112 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.112 09:02:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:10.112 09:02:27 -- nvmf/common.sh@717 -- # local ip 00:28:10.112 09:02:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:10.112 09:02:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:10.112 09:02:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.112 09:02:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.112 09:02:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:10.112 09:02:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.112 09:02:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:10.112 09:02:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:10.112 09:02:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:10.112 09:02:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:10.112 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.112 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.371 nvme0n1 00:28:10.371 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.371 09:02:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.371 09:02:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:10.371 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.371 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.371 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.371 09:02:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.371 09:02:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.371 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 
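One recurring shape worth decoding: every RPC in this log is bracketed by common/autotest_common.sh@549 (xtrace_disable) beforehand and a [[ 0 == 0 ]] status check at @577 afterwards. That is the rpc_cmd wrapper keeping the trace readable: it mutes xtrace around the RPC itself, then asserts on the exit status so a failing RPC fails the test immediately; the bare nvme0n1 lines interleaved in the trace are real command output printed while tracing is muted. A loose sketch, assuming a $rootdir/scripts/rpc.py layout; the real helper multiplexes a persistent rpc.py session, which is omitted here:

    # Loose sketch of the rpc_cmd wrapper whose @549/@577 lines recur
    # throughout this log. Persistent-session plumbing is omitted.
    rpc_cmd() {
        local rc=0
        xtrace_disable                           # @549: hide the RPC plumbing
        "$rootdir/scripts/rpc.py" "$@" || rc=$?
        xtrace_restore
        [[ $rc == 0 ]]                           # @577: the '[[ 0 == 0 ]]' above
    }
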
00:28:10.371 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.371 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.371 09:02:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:10.371 09:02:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:10.371 09:02:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:10.371 09:02:27 -- host/auth.sh@44 -- # digest=sha512 00:28:10.371 09:02:27 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.371 09:02:27 -- host/auth.sh@44 -- # keyid=2 00:28:10.371 09:02:27 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:10.371 09:02:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:10.371 09:02:27 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:10.371 09:02:27 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:10.371 09:02:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:28:10.371 09:02:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:10.371 09:02:27 -- host/auth.sh@68 -- # digest=sha512 00:28:10.371 09:02:27 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:10.371 09:02:27 -- host/auth.sh@68 -- # keyid=2 00:28:10.371 09:02:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:10.371 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.371 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.371 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.371 09:02:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:10.371 09:02:27 -- nvmf/common.sh@717 -- # local ip 00:28:10.371 09:02:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:10.371 09:02:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:10.371 09:02:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.371 09:02:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.371 09:02:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:10.371 09:02:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.371 09:02:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:10.371 09:02:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:10.371 09:02:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:10.371 09:02:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:10.371 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.371 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.371 nvme0n1 00:28:10.371 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.371 09:02:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.371 09:02:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:10.371 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.371 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.371 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.630 09:02:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.630 09:02:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.630 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.630 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.630 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.630 09:02:27 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:10.630 09:02:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:10.630 09:02:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:10.630 09:02:27 -- host/auth.sh@44 -- # digest=sha512 00:28:10.630 09:02:27 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.630 09:02:27 -- host/auth.sh@44 -- # keyid=3 00:28:10.630 09:02:27 -- host/auth.sh@45 -- # key=DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:10.630 09:02:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:10.630 09:02:27 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:10.630 09:02:27 -- host/auth.sh@49 -- # echo DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:10.630 09:02:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:28:10.630 09:02:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:10.630 09:02:27 -- host/auth.sh@68 -- # digest=sha512 00:28:10.630 09:02:27 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:10.630 09:02:27 -- host/auth.sh@68 -- # keyid=3 00:28:10.630 09:02:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:10.630 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.630 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.630 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.630 09:02:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:10.630 09:02:27 -- nvmf/common.sh@717 -- # local ip 00:28:10.630 09:02:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:10.630 09:02:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:10.630 09:02:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.630 09:02:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.630 09:02:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:10.630 09:02:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.630 09:02:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:10.630 09:02:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:10.630 09:02:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:10.630 09:02:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:10.630 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.630 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.630 nvme0n1 00:28:10.630 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.630 09:02:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.630 09:02:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:10.630 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.630 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.630 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.630 09:02:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.630 09:02:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.630 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.630 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.630 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.630 09:02:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:10.630 09:02:27 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe2048 4 00:28:10.630 09:02:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:10.630 09:02:27 -- host/auth.sh@44 -- # digest=sha512 00:28:10.630 09:02:27 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.630 09:02:27 -- host/auth.sh@44 -- # keyid=4 00:28:10.630 09:02:27 -- host/auth.sh@45 -- # key=DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:10.630 09:02:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:10.630 09:02:27 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:10.630 09:02:27 -- host/auth.sh@49 -- # echo DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:10.630 09:02:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:28:10.630 09:02:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:10.630 09:02:27 -- host/auth.sh@68 -- # digest=sha512 00:28:10.630 09:02:27 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:10.630 09:02:27 -- host/auth.sh@68 -- # keyid=4 00:28:10.630 09:02:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:10.630 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.630 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.888 09:02:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.888 09:02:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:10.888 09:02:27 -- nvmf/common.sh@717 -- # local ip 00:28:10.888 09:02:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:10.888 09:02:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:10.888 09:02:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.888 09:02:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.888 09:02:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:10.888 09:02:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.888 09:02:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:10.888 09:02:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:10.888 09:02:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:10.888 09:02:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.888 09:02:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.888 09:02:27 -- common/autotest_common.sh@10 -- # set +x 00:28:10.888 nvme0n1 00:28:10.888 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.888 09:02:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.888 09:02:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:10.888 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.888 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:10.888 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.888 09:02:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.888 09:02:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.888 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.888 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:10.888 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.888 09:02:28 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.888 09:02:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:10.888 09:02:28 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe3072 0 00:28:10.888 09:02:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:10.888 09:02:28 -- host/auth.sh@44 -- # digest=sha512 00:28:10.888 09:02:28 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:10.888 09:02:28 -- host/auth.sh@44 -- # keyid=0 00:28:10.888 09:02:28 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:10.888 09:02:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:10.888 09:02:28 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:10.888 09:02:28 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:10.888 09:02:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:28:10.888 09:02:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:10.888 09:02:28 -- host/auth.sh@68 -- # digest=sha512 00:28:10.888 09:02:28 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:10.888 09:02:28 -- host/auth.sh@68 -- # keyid=0 00:28:10.888 09:02:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:10.888 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.888 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:10.888 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.888 09:02:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:10.888 09:02:28 -- nvmf/common.sh@717 -- # local ip 00:28:10.888 09:02:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:10.888 09:02:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:10.888 09:02:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.888 09:02:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.888 09:02:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:10.888 09:02:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.888 09:02:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:10.888 09:02:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:10.888 09:02:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:10.888 09:02:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:10.888 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.888 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:11.146 nvme0n1 00:28:11.146 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.146 09:02:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.146 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.146 09:02:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.146 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:11.146 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.146 09:02:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.146 09:02:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.146 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.146 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:11.146 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.146 09:02:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:11.146 09:02:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:11.146 09:02:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:11.146 09:02:28 -- host/auth.sh@44 -- # digest=sha512 00:28:11.146 
09:02:28 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.146 09:02:28 -- host/auth.sh@44 -- # keyid=1 00:28:11.146 09:02:28 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:11.146 09:02:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:11.146 09:02:28 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:11.146 09:02:28 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:11.146 09:02:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:28:11.146 09:02:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:11.146 09:02:28 -- host/auth.sh@68 -- # digest=sha512 00:28:11.146 09:02:28 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:11.146 09:02:28 -- host/auth.sh@68 -- # keyid=1 00:28:11.146 09:02:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:11.146 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.146 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:11.146 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.146 09:02:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:11.146 09:02:28 -- nvmf/common.sh@717 -- # local ip 00:28:11.146 09:02:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:11.146 09:02:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:11.146 09:02:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.146 09:02:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.146 09:02:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:11.146 09:02:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.146 09:02:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:11.146 09:02:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:11.146 09:02:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:11.146 09:02:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:11.146 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.146 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:11.405 nvme0n1 00:28:11.405 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.405 09:02:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.405 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.405 09:02:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.405 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:11.405 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.405 09:02:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.405 09:02:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.405 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.405 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:11.405 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.405 09:02:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:11.405 09:02:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:11.405 09:02:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:11.405 09:02:28 -- host/auth.sh@44 -- # digest=sha512 00:28:11.405 09:02:28 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.405 09:02:28 -- host/auth.sh@44 -- # keyid=2 00:28:11.405 
09:02:28 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:11.405 09:02:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:11.405 09:02:28 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:11.405 09:02:28 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:11.405 09:02:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:28:11.405 09:02:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:11.405 09:02:28 -- host/auth.sh@68 -- # digest=sha512 00:28:11.405 09:02:28 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:11.405 09:02:28 -- host/auth.sh@68 -- # keyid=2 00:28:11.405 09:02:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:11.405 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.405 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:11.405 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.405 09:02:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:11.405 09:02:28 -- nvmf/common.sh@717 -- # local ip 00:28:11.405 09:02:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:11.405 09:02:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:11.405 09:02:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.405 09:02:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.405 09:02:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:11.405 09:02:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.405 09:02:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:11.405 09:02:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:11.405 09:02:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:11.405 09:02:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:11.405 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.405 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:11.663 nvme0n1 00:28:11.663 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.663 09:02:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.663 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.663 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:11.663 09:02:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.663 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.663 09:02:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.663 09:02:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.663 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.663 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:11.663 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.663 09:02:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:11.663 09:02:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:11.663 09:02:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:11.663 09:02:28 -- host/auth.sh@44 -- # digest=sha512 00:28:11.663 09:02:28 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.663 09:02:28 -- host/auth.sh@44 -- # keyid=3 00:28:11.663 09:02:28 -- host/auth.sh@45 -- # key=DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:11.663 09:02:28 -- host/auth.sh@47 -- # 
echo 'hmac(sha512)' 00:28:11.663 09:02:28 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:11.663 09:02:28 -- host/auth.sh@49 -- # echo DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:11.663 09:02:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:28:11.663 09:02:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:11.663 09:02:28 -- host/auth.sh@68 -- # digest=sha512 00:28:11.663 09:02:28 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:11.663 09:02:28 -- host/auth.sh@68 -- # keyid=3 00:28:11.663 09:02:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:11.663 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.663 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:11.663 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.663 09:02:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:11.663 09:02:28 -- nvmf/common.sh@717 -- # local ip 00:28:11.663 09:02:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:11.663 09:02:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:11.663 09:02:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.663 09:02:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.663 09:02:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:11.663 09:02:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.663 09:02:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:11.663 09:02:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:11.663 09:02:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:11.663 09:02:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:11.663 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.663 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:11.922 nvme0n1 00:28:11.922 09:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.922 09:02:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.922 09:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.922 09:02:28 -- common/autotest_common.sh@10 -- # set +x 00:28:11.922 09:02:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.922 09:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.922 09:02:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.922 09:02:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.922 09:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.922 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:11.922 09:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.922 09:02:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:11.922 09:02:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:11.922 09:02:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:11.922 09:02:29 -- host/auth.sh@44 -- # digest=sha512 00:28:11.922 09:02:29 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.922 09:02:29 -- host/auth.sh@44 -- # keyid=4 00:28:11.922 09:02:29 -- host/auth.sh@45 -- # key=DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:11.922 09:02:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:11.922 09:02:29 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:11.922 
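
A small reading aid for the check that recurs after every attach: xtrace prints it as [[ nvme0 == \n\v\m\e\0 ]], but the backslashes are just how bash -x renders a quoted right-hand side (each character escaped so the match is literal rather than a glob). Unescaped, the assertion at host/auth.sh@73 amounts to:

    # Assert the freshly attached controller is literally named nvme0.
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]
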
09:02:29 -- host/auth.sh@49 -- # echo DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:11.922 09:02:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:28:11.922 09:02:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:11.922 09:02:29 -- host/auth.sh@68 -- # digest=sha512 00:28:11.922 09:02:29 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:11.922 09:02:29 -- host/auth.sh@68 -- # keyid=4 00:28:11.922 09:02:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:11.922 09:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.922 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:11.922 09:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.922 09:02:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:11.922 09:02:29 -- nvmf/common.sh@717 -- # local ip 00:28:11.922 09:02:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:11.922 09:02:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:11.922 09:02:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.922 09:02:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.922 09:02:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:11.922 09:02:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.922 09:02:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:11.922 09:02:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:11.922 09:02:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:11.922 09:02:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.922 09:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.922 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:12.180 nvme0n1 00:28:12.180 09:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.180 09:02:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.180 09:02:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:12.180 09:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.180 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:12.180 09:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.180 09:02:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.180 09:02:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.181 09:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.181 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:12.181 09:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.181 09:02:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.181 09:02:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:12.181 09:02:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:12.181 09:02:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:12.181 09:02:29 -- host/auth.sh@44 -- # digest=sha512 00:28:12.181 09:02:29 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.181 09:02:29 -- host/auth.sh@44 -- # keyid=0 00:28:12.181 09:02:29 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:12.181 09:02:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:12.181 09:02:29 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:12.181 09:02:29 -- host/auth.sh@49 -- # echo 
DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:12.181 09:02:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:28:12.181 09:02:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:12.181 09:02:29 -- host/auth.sh@68 -- # digest=sha512 00:28:12.181 09:02:29 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:12.181 09:02:29 -- host/auth.sh@68 -- # keyid=0 00:28:12.181 09:02:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:12.181 09:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.181 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:12.181 09:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.181 09:02:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:12.181 09:02:29 -- nvmf/common.sh@717 -- # local ip 00:28:12.181 09:02:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:12.181 09:02:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:12.181 09:02:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.181 09:02:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.181 09:02:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:12.181 09:02:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.181 09:02:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:12.181 09:02:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:12.181 09:02:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:12.181 09:02:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:12.181 09:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.181 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:12.439 nvme0n1 00:28:12.439 09:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.439 09:02:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:12.439 09:02:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.439 09:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.439 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:12.439 09:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.439 09:02:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.439 09:02:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.439 09:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.439 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:12.439 09:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.439 09:02:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:12.439 09:02:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:12.439 09:02:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:12.439 09:02:29 -- host/auth.sh@44 -- # digest=sha512 00:28:12.439 09:02:29 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.439 09:02:29 -- host/auth.sh@44 -- # keyid=1 00:28:12.439 09:02:29 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:12.439 09:02:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:12.439 09:02:29 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:12.439 09:02:29 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:12.439 09:02:29 -- host/auth.sh@111 -- # 
connect_authenticate sha512 ffdhe4096 1 00:28:12.439 09:02:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:12.439 09:02:29 -- host/auth.sh@68 -- # digest=sha512 00:28:12.439 09:02:29 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:12.439 09:02:29 -- host/auth.sh@68 -- # keyid=1 00:28:12.439 09:02:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:12.439 09:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.439 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:12.439 09:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.439 09:02:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:12.439 09:02:29 -- nvmf/common.sh@717 -- # local ip 00:28:12.439 09:02:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:12.439 09:02:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:12.439 09:02:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.439 09:02:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.439 09:02:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:12.439 09:02:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.439 09:02:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:12.439 09:02:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:12.439 09:02:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:12.439 09:02:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:12.439 09:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.439 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:12.697 nvme0n1 00:28:12.697 09:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.697 09:02:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.697 09:02:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:12.697 09:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.697 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:12.697 09:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.697 09:02:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.697 09:02:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.697 09:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.697 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:12.697 09:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.697 09:02:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:12.697 09:02:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:12.697 09:02:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:12.697 09:02:29 -- host/auth.sh@44 -- # digest=sha512 00:28:12.697 09:02:29 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.697 09:02:29 -- host/auth.sh@44 -- # keyid=2 00:28:12.697 09:02:29 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:12.697 09:02:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:12.697 09:02:29 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:12.697 09:02:29 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:12.697 09:02:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:28:12.697 09:02:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:12.697 09:02:29 -- host/auth.sh@68 -- # 
digest=sha512 00:28:12.697 09:02:29 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:12.697 09:02:29 -- host/auth.sh@68 -- # keyid=2 00:28:12.697 09:02:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:12.697 09:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.697 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:12.697 09:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.697 09:02:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:12.697 09:02:29 -- nvmf/common.sh@717 -- # local ip 00:28:12.698 09:02:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:12.698 09:02:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:12.698 09:02:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.698 09:02:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.698 09:02:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:12.698 09:02:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.698 09:02:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:12.698 09:02:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:12.698 09:02:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:12.698 09:02:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:12.698 09:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.698 09:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:12.962 nvme0n1 00:28:12.963 09:02:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.963 09:02:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.963 09:02:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.963 09:02:30 -- common/autotest_common.sh@10 -- # set +x 00:28:12.963 09:02:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:12.963 09:02:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.963 09:02:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.963 09:02:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.963 09:02:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.963 09:02:30 -- common/autotest_common.sh@10 -- # set +x 00:28:13.223 09:02:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.223 09:02:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:13.223 09:02:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:13.223 09:02:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:13.223 09:02:30 -- host/auth.sh@44 -- # digest=sha512 00:28:13.223 09:02:30 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.223 09:02:30 -- host/auth.sh@44 -- # keyid=3 00:28:13.223 09:02:30 -- host/auth.sh@45 -- # key=DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:13.223 09:02:30 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:13.223 09:02:30 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:13.223 09:02:30 -- host/auth.sh@49 -- # echo DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:13.223 09:02:30 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:28:13.223 09:02:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:13.223 09:02:30 -- host/auth.sh@68 -- # digest=sha512 00:28:13.223 09:02:30 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:13.223 09:02:30 -- host/auth.sh@68 
-- # keyid=3 00:28:13.223 09:02:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:13.223 09:02:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.223 09:02:30 -- common/autotest_common.sh@10 -- # set +x 00:28:13.223 09:02:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.223 09:02:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:13.223 09:02:30 -- nvmf/common.sh@717 -- # local ip 00:28:13.223 09:02:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:13.223 09:02:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:13.223 09:02:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.223 09:02:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.223 09:02:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:13.223 09:02:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.223 09:02:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:13.223 09:02:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:13.223 09:02:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:13.223 09:02:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:13.223 09:02:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.223 09:02:30 -- common/autotest_common.sh@10 -- # set +x 00:28:13.481 nvme0n1 00:28:13.481 09:02:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.481 09:02:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.481 09:02:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:13.481 09:02:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.481 09:02:30 -- common/autotest_common.sh@10 -- # set +x 00:28:13.481 09:02:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.481 09:02:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.481 09:02:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.481 09:02:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.481 09:02:30 -- common/autotest_common.sh@10 -- # set +x 00:28:13.481 09:02:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.481 09:02:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:13.481 09:02:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:13.481 09:02:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:13.481 09:02:30 -- host/auth.sh@44 -- # digest=sha512 00:28:13.481 09:02:30 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.481 09:02:30 -- host/auth.sh@44 -- # keyid=4 00:28:13.481 09:02:30 -- host/auth.sh@45 -- # key=DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:13.481 09:02:30 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:13.481 09:02:30 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:13.481 09:02:30 -- host/auth.sh@49 -- # echo DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:13.481 09:02:30 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:28:13.481 09:02:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:13.481 09:02:30 -- host/auth.sh@68 -- # digest=sha512 00:28:13.481 09:02:30 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:13.481 09:02:30 -- host/auth.sh@68 -- # keyid=4 00:28:13.481 09:02:30 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:13.481 09:02:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.481 09:02:30 -- common/autotest_common.sh@10 -- # set +x 00:28:13.481 09:02:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.481 09:02:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:13.481 09:02:30 -- nvmf/common.sh@717 -- # local ip 00:28:13.481 09:02:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:13.481 09:02:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:13.481 09:02:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.481 09:02:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.481 09:02:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:13.481 09:02:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.481 09:02:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:13.481 09:02:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:13.481 09:02:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:13.481 09:02:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:13.481 09:02:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.481 09:02:30 -- common/autotest_common.sh@10 -- # set +x 00:28:13.738 nvme0n1 00:28:13.738 09:02:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.738 09:02:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.738 09:02:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:13.738 09:02:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.738 09:02:30 -- common/autotest_common.sh@10 -- # set +x 00:28:13.738 09:02:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.738 09:02:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.738 09:02:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.738 09:02:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.738 09:02:30 -- common/autotest_common.sh@10 -- # set +x 00:28:13.738 09:02:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.738 09:02:30 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:13.738 09:02:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:13.738 09:02:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:13.738 09:02:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:13.738 09:02:30 -- host/auth.sh@44 -- # digest=sha512 00:28:13.738 09:02:30 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:13.738 09:02:30 -- host/auth.sh@44 -- # keyid=0 00:28:13.739 09:02:30 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:13.739 09:02:30 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:13.739 09:02:30 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:13.739 09:02:30 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:13.739 09:02:30 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:28:13.739 09:02:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:13.739 09:02:30 -- host/auth.sh@68 -- # digest=sha512 00:28:13.739 09:02:30 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:13.739 09:02:30 -- host/auth.sh@68 -- # keyid=0 00:28:13.739 09:02:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:13.739 
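
By this line the outer dhgroup loop (the for markers at host/auth.sh@108-109) has finished ffdhe3072 and ffdhe4096 and moved into ffdhe6144; each group is exercised against all five keys before advancing. The elapsed-time column also stretches with group size, from roughly a quarter second per key at ffdhe3072 to over half a second at ffdhe8192, consistent with the heavier DH exponentiation. The shape of the sweep, with the group list inferred from what this trace visits under sha512:

    # Nested sweep implied by the @108/@109 loop markers; the keys array and
    # helper names come from the trace, the dhgroups list from the groups it visits.
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # re-key the target
            connect_authenticate sha512 "$dhgroup" "$keyid"  # attach, verify, detach
        done
    done
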
09:02:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.739 09:02:30 -- common/autotest_common.sh@10 -- # set +x 00:28:13.739 09:02:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.739 09:02:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:13.739 09:02:30 -- nvmf/common.sh@717 -- # local ip 00:28:13.739 09:02:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:13.739 09:02:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:13.739 09:02:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.739 09:02:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.739 09:02:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:13.739 09:02:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.739 09:02:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:13.739 09:02:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:13.739 09:02:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:13.739 09:02:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:13.739 09:02:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.739 09:02:30 -- common/autotest_common.sh@10 -- # set +x 00:28:14.304 nvme0n1 00:28:14.304 09:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.304 09:02:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.304 09:02:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:14.304 09:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.304 09:02:31 -- common/autotest_common.sh@10 -- # set +x 00:28:14.304 09:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.304 09:02:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.304 09:02:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.304 09:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.304 09:02:31 -- common/autotest_common.sh@10 -- # set +x 00:28:14.304 09:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.304 09:02:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:14.304 09:02:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:14.304 09:02:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:14.304 09:02:31 -- host/auth.sh@44 -- # digest=sha512 00:28:14.304 09:02:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.304 09:02:31 -- host/auth.sh@44 -- # keyid=1 00:28:14.304 09:02:31 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:14.304 09:02:31 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:14.304 09:02:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:14.304 09:02:31 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:14.304 09:02:31 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:28:14.304 09:02:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:14.304 09:02:31 -- host/auth.sh@68 -- # digest=sha512 00:28:14.304 09:02:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:14.304 09:02:31 -- host/auth.sh@68 -- # keyid=1 00:28:14.304 09:02:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:14.304 09:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.304 09:02:31 -- common/autotest_common.sh@10 -- # 
set +x 00:28:14.304 09:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.304 09:02:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:14.304 09:02:31 -- nvmf/common.sh@717 -- # local ip 00:28:14.304 09:02:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:14.304 09:02:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:14.304 09:02:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.304 09:02:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.304 09:02:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:14.304 09:02:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.304 09:02:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:14.304 09:02:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:14.304 09:02:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:14.304 09:02:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:14.304 09:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.304 09:02:31 -- common/autotest_common.sh@10 -- # set +x 00:28:14.561 nvme0n1 00:28:14.561 09:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.561 09:02:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:14.561 09:02:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.561 09:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.561 09:02:31 -- common/autotest_common.sh@10 -- # set +x 00:28:14.561 09:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.561 09:02:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.561 09:02:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.561 09:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.561 09:02:31 -- common/autotest_common.sh@10 -- # set +x 00:28:14.561 09:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.561 09:02:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:14.561 09:02:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:14.561 09:02:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:14.561 09:02:31 -- host/auth.sh@44 -- # digest=sha512 00:28:14.561 09:02:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.561 09:02:31 -- host/auth.sh@44 -- # keyid=2 00:28:14.561 09:02:31 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:14.561 09:02:31 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:14.561 09:02:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:14.561 09:02:31 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:14.561 09:02:31 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:28:14.561 09:02:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:14.561 09:02:31 -- host/auth.sh@68 -- # digest=sha512 00:28:14.561 09:02:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:14.561 09:02:31 -- host/auth.sh@68 -- # keyid=2 00:28:14.561 09:02:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:14.561 09:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.561 09:02:31 -- common/autotest_common.sh@10 -- # set +x 00:28:14.562 09:02:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.562 09:02:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:14.562 09:02:31 -- 
nvmf/common.sh@717 -- # local ip 00:28:14.562 09:02:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:14.562 09:02:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:14.562 09:02:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.562 09:02:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.562 09:02:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:14.562 09:02:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.562 09:02:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:14.562 09:02:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:14.562 09:02:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:14.562 09:02:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:14.562 09:02:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.562 09:02:31 -- common/autotest_common.sh@10 -- # set +x 00:28:15.127 nvme0n1 00:28:15.127 09:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.127 09:02:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:15.127 09:02:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.127 09:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.127 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:28:15.127 09:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.127 09:02:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.127 09:02:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.127 09:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.127 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:28:15.127 09:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.127 09:02:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:15.127 09:02:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:15.127 09:02:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:15.127 09:02:32 -- host/auth.sh@44 -- # digest=sha512 00:28:15.127 09:02:32 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.127 09:02:32 -- host/auth.sh@44 -- # keyid=3 00:28:15.127 09:02:32 -- host/auth.sh@45 -- # key=DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:15.127 09:02:32 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:15.127 09:02:32 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:15.127 09:02:32 -- host/auth.sh@49 -- # echo DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:15.127 09:02:32 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:28:15.127 09:02:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:15.127 09:02:32 -- host/auth.sh@68 -- # digest=sha512 00:28:15.127 09:02:32 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:15.127 09:02:32 -- host/auth.sh@68 -- # keyid=3 00:28:15.127 09:02:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:15.127 09:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.127 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:28:15.127 09:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.127 09:02:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:15.127 09:02:32 -- nvmf/common.sh@717 -- # local ip 00:28:15.127 09:02:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:15.127 09:02:32 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:15.127 09:02:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.127 09:02:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.127 09:02:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:15.127 09:02:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.127 09:02:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:15.127 09:02:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:15.127 09:02:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:15.127 09:02:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:15.127 09:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.127 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:28:15.386 nvme0n1 00:28:15.386 09:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.386 09:02:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.386 09:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.386 09:02:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:15.386 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:28:15.386 09:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.386 09:02:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.386 09:02:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.386 09:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.386 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:28:15.386 09:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.386 09:02:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:15.386 09:02:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:15.386 09:02:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:15.386 09:02:32 -- host/auth.sh@44 -- # digest=sha512 00:28:15.386 09:02:32 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.386 09:02:32 -- host/auth.sh@44 -- # keyid=4 00:28:15.386 09:02:32 -- host/auth.sh@45 -- # key=DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:15.386 09:02:32 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:15.386 09:02:32 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:15.386 09:02:32 -- host/auth.sh@49 -- # echo DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:15.386 09:02:32 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:28:15.386 09:02:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:15.386 09:02:32 -- host/auth.sh@68 -- # digest=sha512 00:28:15.386 09:02:32 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:15.386 09:02:32 -- host/auth.sh@68 -- # keyid=4 00:28:15.386 09:02:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:15.386 09:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.386 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:28:15.386 09:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.386 09:02:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:15.386 09:02:32 -- nvmf/common.sh@717 -- # local ip 00:28:15.386 09:02:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:15.386 09:02:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:15.386 09:02:32 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.386 09:02:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.386 09:02:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:15.386 09:02:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.386 09:02:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:15.386 09:02:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:15.386 09:02:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:15.386 09:02:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.386 09:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.386 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:28:15.948 nvme0n1 00:28:15.948 09:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.948 09:02:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.948 09:02:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:15.948 09:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.948 09:02:32 -- common/autotest_common.sh@10 -- # set +x 00:28:15.948 09:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.948 09:02:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.948 09:02:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.948 09:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.948 09:02:33 -- common/autotest_common.sh@10 -- # set +x 00:28:15.948 09:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.948 09:02:33 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.948 09:02:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:15.948 09:02:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:15.948 09:02:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:15.948 09:02:33 -- host/auth.sh@44 -- # digest=sha512 00:28:15.948 09:02:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.948 09:02:33 -- host/auth.sh@44 -- # keyid=0 00:28:15.948 09:02:33 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:15.948 09:02:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:15.948 09:02:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:15.948 09:02:33 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2VlMzA4NDA1MTY0NjZmN2IyNjYxMWQxZjk0ZThmNGbtld3p: 00:28:15.948 09:02:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:28:15.948 09:02:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:15.948 09:02:33 -- host/auth.sh@68 -- # digest=sha512 00:28:15.948 09:02:33 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:15.948 09:02:33 -- host/auth.sh@68 -- # keyid=0 00:28:15.948 09:02:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:15.948 09:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.948 09:02:33 -- common/autotest_common.sh@10 -- # set +x 00:28:15.948 09:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.948 09:02:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:15.948 09:02:33 -- nvmf/common.sh@717 -- # local ip 00:28:15.948 09:02:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:15.948 09:02:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:15.948 09:02:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.948 09:02:33 
-- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.948 09:02:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:15.948 09:02:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.948 09:02:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:15.948 09:02:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:15.949 09:02:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:15.949 09:02:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:15.949 09:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.949 09:02:33 -- common/autotest_common.sh@10 -- # set +x 00:28:16.513 nvme0n1 00:28:16.514 09:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.514 09:02:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.514 09:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.514 09:02:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:16.514 09:02:33 -- common/autotest_common.sh@10 -- # set +x 00:28:16.514 09:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.514 09:02:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.514 09:02:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.514 09:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.514 09:02:33 -- common/autotest_common.sh@10 -- # set +x 00:28:16.514 09:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.514 09:02:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:16.514 09:02:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:16.514 09:02:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:16.514 09:02:33 -- host/auth.sh@44 -- # digest=sha512 00:28:16.514 09:02:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.514 09:02:33 -- host/auth.sh@44 -- # keyid=1 00:28:16.514 09:02:33 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:16.514 09:02:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:16.514 09:02:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:16.514 09:02:33 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:16.514 09:02:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:28:16.514 09:02:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:16.514 09:02:33 -- host/auth.sh@68 -- # digest=sha512 00:28:16.514 09:02:33 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:16.514 09:02:33 -- host/auth.sh@68 -- # keyid=1 00:28:16.514 09:02:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:16.514 09:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.514 09:02:33 -- common/autotest_common.sh@10 -- # set +x 00:28:16.514 09:02:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.514 09:02:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:16.514 09:02:33 -- nvmf/common.sh@717 -- # local ip 00:28:16.514 09:02:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:16.514 09:02:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:16.514 09:02:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.514 09:02:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.514 09:02:33 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:28:16.514 09:02:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.514 09:02:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:16.514 09:02:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:16.514 09:02:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:16.514 09:02:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:16.514 09:02:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.514 09:02:33 -- common/autotest_common.sh@10 -- # set +x 00:28:17.080 nvme0n1 00:28:17.080 09:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.080 09:02:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.080 09:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.080 09:02:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:17.080 09:02:34 -- common/autotest_common.sh@10 -- # set +x 00:28:17.080 09:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.080 09:02:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.080 09:02:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.080 09:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.080 09:02:34 -- common/autotest_common.sh@10 -- # set +x 00:28:17.080 09:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.080 09:02:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:17.080 09:02:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:17.080 09:02:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:17.080 09:02:34 -- host/auth.sh@44 -- # digest=sha512 00:28:17.080 09:02:34 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.080 09:02:34 -- host/auth.sh@44 -- # keyid=2 00:28:17.080 09:02:34 -- host/auth.sh@45 -- # key=DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:17.080 09:02:34 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:17.080 09:02:34 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:17.080 09:02:34 -- host/auth.sh@49 -- # echo DHHC-1:01:ZGRkYTM1NDYyMmMxNmUzOTg4NmZiZmQzYjgzNTg0MmQkJU9p: 00:28:17.080 09:02:34 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:28:17.080 09:02:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:17.080 09:02:34 -- host/auth.sh@68 -- # digest=sha512 00:28:17.080 09:02:34 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:17.080 09:02:34 -- host/auth.sh@68 -- # keyid=2 00:28:17.080 09:02:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:17.080 09:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.080 09:02:34 -- common/autotest_common.sh@10 -- # set +x 00:28:17.080 09:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.080 09:02:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:17.080 09:02:34 -- nvmf/common.sh@717 -- # local ip 00:28:17.080 09:02:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:17.080 09:02:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:17.080 09:02:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.080 09:02:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.080 09:02:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:17.080 09:02:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.080 09:02:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:17.080 
09:02:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:17.080 09:02:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:17.339 09:02:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:17.339 09:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.339 09:02:34 -- common/autotest_common.sh@10 -- # set +x 00:28:17.906 nvme0n1 00:28:17.906 09:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.906 09:02:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.906 09:02:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:17.906 09:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.906 09:02:34 -- common/autotest_common.sh@10 -- # set +x 00:28:17.906 09:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.906 09:02:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.906 09:02:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.906 09:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.906 09:02:34 -- common/autotest_common.sh@10 -- # set +x 00:28:17.906 09:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.906 09:02:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:17.906 09:02:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:17.906 09:02:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:17.906 09:02:34 -- host/auth.sh@44 -- # digest=sha512 00:28:17.906 09:02:34 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.906 09:02:34 -- host/auth.sh@44 -- # keyid=3 00:28:17.906 09:02:34 -- host/auth.sh@45 -- # key=DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:17.906 09:02:34 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:17.906 09:02:34 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:17.906 09:02:34 -- host/auth.sh@49 -- # echo DHHC-1:02:YjU4MmFjMzJiZGNlYTQwOTY3OGU1NTU5M2E0MGVlOTdmNTg4ZDQ4ODhiOWNhYzQ2a4zbCA==: 00:28:17.906 09:02:34 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:28:17.906 09:02:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:17.906 09:02:34 -- host/auth.sh@68 -- # digest=sha512 00:28:17.906 09:02:34 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:17.906 09:02:34 -- host/auth.sh@68 -- # keyid=3 00:28:17.906 09:02:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:17.906 09:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.906 09:02:34 -- common/autotest_common.sh@10 -- # set +x 00:28:17.906 09:02:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.906 09:02:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:17.906 09:02:34 -- nvmf/common.sh@717 -- # local ip 00:28:17.906 09:02:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:17.906 09:02:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:17.906 09:02:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.906 09:02:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.906 09:02:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:17.906 09:02:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.906 09:02:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:17.906 09:02:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:17.906 09:02:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
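
The nvmf/common.sh@717-731 block that repeats before every attach, ending just above with echo 10.0.0.1, is the address-resolution helper: it maps the transport to the name of the environment variable that holds the dial address and resolves it with bash indirect expansion. Reconstructed from the trace (the name of the transport variable is an assumption; the trace only ever shows its value, tcp):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1      # trace: [[ -z tcp ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}      # -> NVMF_INITIATOR_IP
        [[ -z $ip ]] && return 1
        [[ -z ${!ip} ]] && return 1               # trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                             # -> 10.0.0.1
    }
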
00:28:17.906 09:02:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:17.906 09:02:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.906 09:02:34 -- common/autotest_common.sh@10 -- # set +x 00:28:18.473 nvme0n1 00:28:18.473 09:02:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.473 09:02:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.473 09:02:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.473 09:02:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:18.473 09:02:35 -- common/autotest_common.sh@10 -- # set +x 00:28:18.473 09:02:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.473 09:02:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.474 09:02:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.474 09:02:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.474 09:02:35 -- common/autotest_common.sh@10 -- # set +x 00:28:18.474 09:02:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.474 09:02:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:18.474 09:02:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:18.474 09:02:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:18.474 09:02:35 -- host/auth.sh@44 -- # digest=sha512 00:28:18.474 09:02:35 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.474 09:02:35 -- host/auth.sh@44 -- # keyid=4 00:28:18.474 09:02:35 -- host/auth.sh@45 -- # key=DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:18.474 09:02:35 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:18.474 09:02:35 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:18.474 09:02:35 -- host/auth.sh@49 -- # echo DHHC-1:03:NjY5YWM0NzgzNWE2Nzc2Mjg3ZTBiYzIxODk3YjYyNjU5Mzg0OGQ4NzNmOTI3OGMyZDllNWYwMGQ5ZjJlNmRmYQx7QB0=: 00:28:18.474 09:02:35 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:28:18.474 09:02:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:18.474 09:02:35 -- host/auth.sh@68 -- # digest=sha512 00:28:18.474 09:02:35 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:18.474 09:02:35 -- host/auth.sh@68 -- # keyid=4 00:28:18.474 09:02:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:18.474 09:02:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.474 09:02:35 -- common/autotest_common.sh@10 -- # set +x 00:28:18.474 09:02:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.474 09:02:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:18.474 09:02:35 -- nvmf/common.sh@717 -- # local ip 00:28:18.474 09:02:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:18.474 09:02:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:18.474 09:02:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.474 09:02:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.474 09:02:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:18.474 09:02:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.474 09:02:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:18.474 09:02:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:18.474 09:02:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:18.474 09:02:35 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:18.474 09:02:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.474 09:02:35 -- common/autotest_common.sh@10 -- # set +x 00:28:19.040 nvme0n1 00:28:19.040 09:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.040 09:02:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.040 09:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.040 09:02:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:19.040 09:02:36 -- common/autotest_common.sh@10 -- # set +x 00:28:19.040 09:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.040 09:02:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.040 09:02:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.040 09:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.040 09:02:36 -- common/autotest_common.sh@10 -- # set +x 00:28:19.040 09:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.040 09:02:36 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:19.040 09:02:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:19.040 09:02:36 -- host/auth.sh@44 -- # digest=sha256 00:28:19.040 09:02:36 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:19.040 09:02:36 -- host/auth.sh@44 -- # keyid=1 00:28:19.040 09:02:36 -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:19.040 09:02:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:19.040 09:02:36 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:19.040 09:02:36 -- host/auth.sh@49 -- # echo DHHC-1:00:M2ZjNzAwNTQwZTVmNjM5ZTY0YTkxOTk5YjE0MDhmY2JlMzFhNGJmYmNiN2I0ZGE1OUdpJA==: 00:28:19.040 09:02:36 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:19.040 09:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.040 09:02:36 -- common/autotest_common.sh@10 -- # set +x 00:28:19.040 09:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.040 09:02:36 -- host/auth.sh@119 -- # get_main_ns_ip 00:28:19.040 09:02:36 -- nvmf/common.sh@717 -- # local ip 00:28:19.040 09:02:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:19.040 09:02:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:19.040 09:02:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.040 09:02:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.040 09:02:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:19.040 09:02:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.040 09:02:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:19.040 09:02:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:19.040 09:02:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:19.040 09:02:36 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:19.040 09:02:36 -- common/autotest_common.sh@638 -- # local es=0 00:28:19.040 09:02:36 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:19.040 09:02:36 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:19.040 09:02:36 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:28:19.040 09:02:36 -- common/autotest_common.sh@630 -- # type -t rpc_cmd
00:28:19.040 09:02:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:28:19.040 09:02:36 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:28:19.040 09:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.040 09:02:36 -- common/autotest_common.sh@10 -- # set +x
00:28:19.040 request:
00:28:19.040 {
00:28:19.040   "name": "nvme0",
00:28:19.040   "trtype": "tcp",
00:28:19.040   "traddr": "10.0.0.1",
00:28:19.040   "hostnqn": "nqn.2024-02.io.spdk:host0",
00:28:19.040   "adrfam": "ipv4",
00:28:19.040   "trsvcid": "4420",
00:28:19.040   "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:28:19.040   "method": "bdev_nvme_attach_controller",
00:28:19.040   "req_id": 1
00:28:19.040 }
00:28:19.040 Got JSON-RPC error response
00:28:19.040 response:
00:28:19.040 {
00:28:19.040   "code": -32602,
00:28:19.040   "message": "Invalid parameters"
00:28:19.040 }
00:28:19.040 09:02:36 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
00:28:19.040 09:02:36 -- common/autotest_common.sh@641 -- # es=1
00:28:19.040 09:02:36 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:28:19.040 09:02:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:28:19.040 09:02:36 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:28:19.040 09:02:36 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.040 09:02:36 -- host/auth.sh@121 -- # jq length
00:28:19.040 09:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.040 09:02:36 -- common/autotest_common.sh@10 -- # set +x
00:28:19.040 09:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.304 09:02:36 -- host/auth.sh@121 -- # (( 0 == 0 ))
00:28:19.304 09:02:36 -- host/auth.sh@124 -- # get_main_ns_ip
00:28:19.304 09:02:36 -- nvmf/common.sh@717 -- # local ip
00:28:19.304 09:02:36 -- nvmf/common.sh@718 -- # ip_candidates=()
00:28:19.304 09:02:36 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:28:19.304 09:02:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:19.304 09:02:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:19.304 09:02:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:28:19.304 09:02:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:19.304 09:02:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:28:19.304 09:02:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:28:19.304 09:02:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:28:19.304 09:02:36 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:19.304 09:02:36 -- common/autotest_common.sh@638 -- # local es=0
00:28:19.304 09:02:36 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:19.304 09:02:36 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd
00:28:19.304 09:02:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:28:19.304 09:02:36 -- common/autotest_common.sh@630 -- # type -t rpc_cmd
00:28:19.304 09:02:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:28:19.304 09:02:36 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:28:19.304 09:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.304 09:02:36 -- common/autotest_common.sh@10 -- # set +x
00:28:19.304 request:
00:28:19.304 {
00:28:19.304   "name": "nvme0",
00:28:19.304   "trtype": "tcp",
00:28:19.304   "traddr": "10.0.0.1",
00:28:19.304   "hostnqn": "nqn.2024-02.io.spdk:host0",
00:28:19.304   "adrfam": "ipv4",
00:28:19.304   "trsvcid": "4420",
00:28:19.304   "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:28:19.304   "dhchap_key": "key2",
00:28:19.304   "method": "bdev_nvme_attach_controller",
00:28:19.304   "req_id": 1
00:28:19.304 }
00:28:19.304 Got JSON-RPC error response
00:28:19.304 response:
00:28:19.304 {
00:28:19.304   "code": -32602,
00:28:19.304   "message": "Invalid parameters"
00:28:19.304 }
00:28:19.304 09:02:36 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
00:28:19.304 09:02:36 -- common/autotest_common.sh@641 -- # es=1
00:28:19.304 09:02:36 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:28:19.304 09:02:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:28:19.304 09:02:36 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:28:19.304 09:02:36 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers
00:28:19.304 09:02:36 -- host/auth.sh@127 -- # jq length
00:28:19.304 09:02:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:28:19.304 09:02:36 -- common/autotest_common.sh@10 -- # set +x
00:28:19.304 09:02:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:28:19.304 09:02:36 -- host/auth.sh@127 -- # (( 0 == 0 ))
00:28:19.304 09:02:36 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT
00:28:19.304 09:02:36 -- host/auth.sh@130 -- # cleanup
00:28:19.304 09:02:36 -- host/auth.sh@24 -- # nvmftestfini
00:28:19.304 09:02:36 -- nvmf/common.sh@477 -- # nvmfcleanup
00:28:19.304 09:02:36 -- nvmf/common.sh@117 -- # sync
00:28:19.304 09:02:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:19.304 09:02:36 -- nvmf/common.sh@120 -- # set +e
00:28:19.304 09:02:36 -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:19.304 09:02:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:19.304 rmmod nvme_tcp
00:28:19.304 rmmod nvme_fabrics
00:28:19.304 09:02:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:19.304 09:02:36 -- nvmf/common.sh@124 -- # set -e
00:28:19.304 09:02:36 -- nvmf/common.sh@125 -- # return 0
00:28:19.304 09:02:36 -- nvmf/common.sh@478 -- # '[' -n 2206476 ']'
00:28:19.304 09:02:36 -- nvmf/common.sh@479 -- # killprocess 2206476
00:28:19.304 09:02:36 -- common/autotest_common.sh@936 -- # '[' -z 2206476 ']'
00:28:19.304 09:02:36 -- common/autotest_common.sh@940 -- # kill -0 2206476
00:28:19.304 09:02:36 -- common/autotest_common.sh@941 -- # uname
00:28:19.304 09:02:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:19.304 09:02:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2206476
00:28:19.563 09:02:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:19.563 09:02:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:19.563 09:02:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2206476'
00:28:19.563 killing process with pid 2206476
00:28:19.563 09:02:36 -- common/autotest_common.sh@955 -- # kill 2206476
00:28:19.563 09:02:36 -- common/autotest_common.sh@960 -- # wait 2206476
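With both negative cases confirmed (attach with no key and attach with the wrong key both return -32602, leaving zero controllers), the suite unwinds in strict child-before-parent order: unlink the host from the subsystem, remove the host entry, disable and remove the namespace, drop the port link, then the port and the subsystem, and finally unload nvmet_tcp/nvmet. Condensed from the teardown traced below (the target of the bare 'echo 0' is the namespace enable attribute; that path is inferred, not shown in the trace):

    base=/sys/kernel/config/nvmet
    subsys=$base/subsystems/nqn.2024-02.io.spdk:cnode0
    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
    rmdir "$base/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$subsys/namespaces/1/enable"    # path assumed; trace only shows 'echo 0'
    rm -f "$base/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
    rmdir "$subsys/namespaces/1"
    rmdir "$base/ports/1"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet

Doing the rmdir calls in any other order fails, because configfs refuses to remove a directory that still has children or symlinks pointing at it.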
00:28:19.563 09:02:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:28:19.563 09:02:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:28:19.563 09:02:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:28:19.563 09:02:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:19.563 09:02:36 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:19.563 09:02:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:19.563 09:02:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:19.563 09:02:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:22.104 09:02:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:22.104 09:02:38 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:28:22.104 09:02:38 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:28:22.104 09:02:38 -- host/auth.sh@27 -- # clean_kernel_target
00:28:22.104 09:02:38 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]]
00:28:22.104 09:02:38 -- nvmf/common.sh@675 -- # echo 0
00:28:22.104 09:02:38 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
00:28:22.104 09:02:38 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:28:22.104 09:02:38 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:28:22.104 09:02:38 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:28:22.104 09:02:38 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*)
00:28:22.104 09:02:38 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet
00:28:22.104 09:02:38 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:28:25.390 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:28:25.390 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:28:25.390 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:28:25.390 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:28:25.390 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:28:25.390 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:28:25.390 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:28:25.390 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:28:25.390 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:28:25.390 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:28:25.390 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:28:25.390 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:28:25.390 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:28:25.390 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:28:25.390 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:28:25.390 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:28:26.769 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:28:27.028 09:02:44 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.vF3 /tmp/spdk.key-null.oDn /tmp/spdk.key-sha256.6W9 /tmp/spdk.key-sha384.zzI /tmp/spdk.key-sha512.Tpb /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log
00:28:27.028 09:02:44 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:28:30.313 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:28:30.313 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:28:30.313 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:28:30.313 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:28:30.313 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:28:30.313 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:28:30.313 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:28:30.313 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:28:30.314 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:28:30.314 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:28:30.314 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:28:30.314 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:28:30.314 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:28:30.314 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:28:30.314 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:28:30.314 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:28:30.314 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:28:30.314
00:28:30.314 real    0m52.596s
00:28:30.314 user    0m44.272s
00:28:30.314 sys     0m15.116s
00:28:30.314 09:02:47 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:28:30.314 09:02:47 -- common/autotest_common.sh@10 -- # set +x
00:28:30.314 ************************************
00:28:30.314 END TEST nvmf_auth
00:28:30.314 ************************************
00:28:30.572 09:02:47 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]]
00:28:30.572 09:02:47 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:28:30.572 09:02:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:28:30.572 09:02:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:30.572 09:02:47 -- common/autotest_common.sh@10 -- # set +x
00:28:30.572 ************************************
00:28:30.572 START TEST nvmf_digest
00:28:30.572 ************************************
00:28:30.572 09:02:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:28:30.831 * Looking for test storage...
00:28:30.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:30.831 09:02:47 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.831 09:02:47 -- nvmf/common.sh@7 -- # uname -s 00:28:30.831 09:02:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.831 09:02:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.831 09:02:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.831 09:02:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.831 09:02:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.831 09:02:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.831 09:02:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.831 09:02:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.831 09:02:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.831 09:02:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.831 09:02:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:30.831 09:02:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:30.831 09:02:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.831 09:02:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.831 09:02:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.831 09:02:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.831 09:02:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.831 09:02:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.831 09:02:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.831 09:02:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.831 09:02:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.831 09:02:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.831 09:02:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.831 09:02:47 -- paths/export.sh@5 -- # export PATH 00:28:30.831 09:02:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.831 09:02:47 -- nvmf/common.sh@47 -- # : 0 00:28:30.831 09:02:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:30.831 09:02:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:30.831 09:02:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.831 09:02:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.831 09:02:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.831 09:02:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:30.831 09:02:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:30.831 09:02:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:30.831 09:02:47 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:30.831 09:02:47 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:30.831 09:02:47 -- host/digest.sh@16 -- # runtime=2 00:28:30.831 09:02:47 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:30.831 09:02:47 -- host/digest.sh@138 -- # nvmftestinit 00:28:30.831 09:02:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:30.831 09:02:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.831 09:02:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:30.831 09:02:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:30.831 09:02:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:30.831 09:02:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.831 09:02:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:30.831 09:02:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.831 09:02:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:30.831 09:02:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:30.831 09:02:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:30.831 09:02:47 -- common/autotest_common.sh@10 -- # set +x 00:28:37.394 09:02:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:37.394 09:02:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:37.394 09:02:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:37.394 09:02:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:37.394 09:02:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:37.394 09:02:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:37.394 09:02:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:37.394 09:02:54 -- 
nvmf/common.sh@295 -- # net_devs=() 00:28:37.394 09:02:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:37.394 09:02:54 -- nvmf/common.sh@296 -- # e810=() 00:28:37.394 09:02:54 -- nvmf/common.sh@296 -- # local -ga e810 00:28:37.394 09:02:54 -- nvmf/common.sh@297 -- # x722=() 00:28:37.394 09:02:54 -- nvmf/common.sh@297 -- # local -ga x722 00:28:37.394 09:02:54 -- nvmf/common.sh@298 -- # mlx=() 00:28:37.394 09:02:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:37.394 09:02:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.394 09:02:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.394 09:02:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.394 09:02:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.394 09:02:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.394 09:02:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.394 09:02:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.395 09:02:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.395 09:02:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.395 09:02:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.395 09:02:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.395 09:02:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:37.395 09:02:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:37.395 09:02:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:37.395 09:02:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:37.395 09:02:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:37.395 09:02:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:37.395 09:02:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.395 09:02:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:37.395 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:37.395 09:02:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.395 09:02:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.395 09:02:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.395 09:02:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.395 09:02:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.395 09:02:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.395 09:02:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:37.395 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:37.395 09:02:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.395 09:02:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.395 09:02:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.395 09:02:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.395 09:02:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.395 09:02:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:37.395 09:02:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:37.395 09:02:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:37.395 09:02:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.395 09:02:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.395 09:02:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:37.395 09:02:54 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:37.395 09:02:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:28:37.395 Found net devices under 0000:af:00.0: cvl_0_0
00:28:37.395 09:02:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:28:37.395 09:02:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:28:37.395 09:02:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:37.395 09:02:54 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:28:37.395 09:02:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:37.395 09:02:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:28:37.395 Found net devices under 0000:af:00.1: cvl_0_1
00:28:37.395 09:02:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:28:37.395 09:02:54 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:28:37.395 09:02:54 -- nvmf/common.sh@403 -- # is_hw=yes
00:28:37.395 09:02:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:28:37.395 09:02:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:28:37.395 09:02:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:28:37.395 09:02:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:37.395 09:02:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:37.395 09:02:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:37.395 09:02:54 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:28:37.395 09:02:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:37.395 09:02:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:37.395 09:02:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:28:37.395 09:02:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:37.395 09:02:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:37.395 09:02:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:28:37.395 09:02:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:28:37.395 09:02:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:28:37.395 09:02:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:37.395 09:02:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:37.395 09:02:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:37.653 09:02:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:28:37.653 09:02:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:37.653 09:02:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:37.653 09:02:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:37.653 09:02:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:28:37.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:37.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms
00:28:37.653
00:28:37.653 --- 10.0.0.2 ping statistics ---
00:28:37.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:37.653 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms
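The reverse-direction ping follows below. Condensed, the plumbing just traced moves one port of the dual-port e810 NIC into a private namespace to act as the target (10.0.0.2) while its sibling stays in the root namespace as the initiator (10.0.0.1); every command here appears verbatim in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                             # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> root ns

Because the two ports are physically cabled back-to-back, this yields real on-the-wire NVMe/TCP traffic between namespaces rather than loopback.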
00:28:37.653 09:02:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:37.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:37.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms
00:28:37.653
00:28:37.653 --- 10.0.0.1 ping statistics ---
00:28:37.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:37.653 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms
00:28:37.653 09:02:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:37.653 09:02:54 -- nvmf/common.sh@411 -- # return 0
00:28:37.653 09:02:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:28:37.653 09:02:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:37.653 09:02:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:28:37.653 09:02:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:28:37.653 09:02:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:37.653 09:02:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:28:37.653 09:02:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:28:37.654 09:02:54 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:28:37.654 09:02:54 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:28:37.654 09:02:54 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:28:37.654 09:02:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:37.654 09:02:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:37.654 09:02:54 -- common/autotest_common.sh@10 -- # set +x
00:28:37.912 ************************************
00:28:37.912 START TEST nvmf_digest_clean
00:28:37.912 ************************************
00:28:37.912 09:02:54 -- common/autotest_common.sh@1111 -- # run_digest
00:28:37.912 09:02:54 -- host/digest.sh@120 -- # local dsa_initiator
00:28:37.912 09:02:54 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]]
00:28:37.912 09:02:54 -- host/digest.sh@121 -- # dsa_initiator=false
00:28:37.912 09:02:54 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc")
00:28:37.912 09:02:54 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc
00:28:37.912 09:02:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:28:37.912 09:02:54 -- common/autotest_common.sh@710 -- # xtrace_disable
00:28:37.912 09:02:54 -- common/autotest_common.sh@10 -- # set +x
00:28:37.912 09:02:54 -- nvmf/common.sh@470 -- # nvmfpid=2220377
00:28:37.912 09:02:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:28:37.912 09:02:54 -- nvmf/common.sh@471 -- # waitforlisten 2220377
00:28:37.912 09:02:54 -- common/autotest_common.sh@817 -- # '[' -z 2220377 ']'
00:28:37.912 09:02:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:37.912 09:02:54 -- common/autotest_common.sh@822 -- # local max_retries=100
00:28:37.912 09:02:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:37.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:37.912 09:02:54 -- common/autotest_common.sh@826 -- # xtrace_disable
00:28:37.912 09:02:54 -- common/autotest_common.sh@10 -- # set +x
00:28:37.912 [2024-04-26 09:02:55.025427] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:28:37.912 [2024-04-26 09:02:55.025476] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.912 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.912 [2024-04-26 09:02:55.099038] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.172 [2024-04-26 09:02:55.173806] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.172 [2024-04-26 09:02:55.173837] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:38.172 [2024-04-26 09:02:55.173847] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.172 [2024-04-26 09:02:55.173856] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.172 [2024-04-26 09:02:55.173863] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:38.172 [2024-04-26 09:02:55.173891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.736 09:02:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:38.736 09:02:55 -- common/autotest_common.sh@850 -- # return 0 00:28:38.736 09:02:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:38.736 09:02:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:38.736 09:02:55 -- common/autotest_common.sh@10 -- # set +x 00:28:38.736 09:02:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.736 09:02:55 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:38.736 09:02:55 -- host/digest.sh@126 -- # common_target_config 00:28:38.736 09:02:55 -- host/digest.sh@43 -- # rpc_cmd 00:28:38.736 09:02:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:38.736 09:02:55 -- common/autotest_common.sh@10 -- # set +x 00:28:38.736 null0 00:28:38.736 [2024-04-26 09:02:55.957100] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.736 [2024-04-26 09:02:55.981337] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.995 09:02:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:38.995 09:02:55 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:38.995 09:02:55 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:38.995 09:02:55 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:38.995 09:02:55 -- host/digest.sh@80 -- # rw=randread 00:28:38.995 09:02:55 -- host/digest.sh@80 -- # bs=4096 00:28:38.995 09:02:55 -- host/digest.sh@80 -- # qd=128 00:28:38.995 09:02:55 -- host/digest.sh@80 -- # scan_dsa=false 00:28:38.995 09:02:55 -- host/digest.sh@83 -- # bperfpid=2220650 00:28:38.995 09:02:55 -- host/digest.sh@84 -- # waitforlisten 2220650 /var/tmp/bperf.sock 00:28:38.995 09:02:55 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:38.995 09:02:55 -- common/autotest_common.sh@817 -- # '[' -z 2220650 ']' 00:28:38.995 09:02:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:38.995 09:02:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:38.995 09:02:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:38.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:38.995 09:02:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:38.995 09:02:55 -- common/autotest_common.sh@10 -- # set +x 00:28:38.995 [2024-04-26 09:02:56.035017] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:28:38.995 [2024-04-26 09:02:56.035067] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220650 ] 00:28:38.995 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.995 [2024-04-26 09:02:56.104309] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.995 [2024-04-26 09:02:56.177473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.926 09:02:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:39.926 09:02:56 -- common/autotest_common.sh@850 -- # return 0 00:28:39.926 09:02:56 -- host/digest.sh@86 -- # false 00:28:39.926 09:02:56 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:39.926 09:02:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:39.926 09:02:57 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:39.926 09:02:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.184 nvme0n1 00:28:40.441 09:02:57 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:40.441 09:02:57 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:40.441 Running I/O for 2 seconds... 
00:28:42.337
00:28:42.337 Latency(us)
00:28:42.337 Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min        max
00:28:42.337 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:42.337 nvme0n1            :       2.00 24807.39    96.90     0.00     0.00   5154.06  2202.01   37958.45
00:28:42.337 ===================================================================================================================
00:28:42.337 Total              :            24807.39    96.90     0.00     0.00   5154.06  2202.01   37958.45
00:28:42.337 0
00:28:42.337 09:02:59 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:42.337 09:02:59 -- host/digest.sh@93 -- # get_accel_stats
00:28:42.337 09:02:59 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:42.337 09:02:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:42.337 09:02:59 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:42.338 | select(.opcode=="crc32c")
00:28:42.338 | "\(.module_name) \(.executed)"'
00:28:42.595 09:02:59 -- host/digest.sh@94 -- # false
00:28:42.595 09:02:59 -- host/digest.sh@94 -- # exp_module=software
00:28:42.595 09:02:59 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:42.595 09:02:59 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:42.595 09:02:59 -- host/digest.sh@98 -- # killprocess 2220650
00:28:42.595 09:02:59 -- common/autotest_common.sh@936 -- # '[' -z 2220650 ']'
00:28:42.595 09:02:59 -- common/autotest_common.sh@940 -- # kill -0 2220650
00:28:42.595 09:02:59 -- common/autotest_common.sh@941 -- # uname
00:28:42.595 09:02:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:42.595 09:02:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2220650
00:28:42.595 09:02:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:28:42.595 09:02:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:28:42.595 09:02:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2220650'
00:28:42.595 killing process with pid 2220650
00:28:42.595 09:02:59 -- common/autotest_common.sh@955 -- # kill 2220650
00:28:42.595 Received shutdown signal, test time was about 2.000000 seconds
00:28:42.595
00:28:42.595 Latency(us)
00:28:42.595 Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min        max
00:28:42.595 ===================================================================================================================
00:28:42.595 Total              :                0.00     0.00     0.00     0.00      0.00     0.00       0.00
00:28:42.595 09:02:59 -- common/autotest_common.sh@960 -- # wait 2220650
00:28:42.853 09:02:59 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:28:42.853 09:02:59 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:42.853 09:02:59 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:42.853 09:02:59 -- host/digest.sh@80 -- # rw=randread
00:28:42.853 09:02:59 -- host/digest.sh@80 -- # bs=131072
00:28:42.853 09:02:59 -- host/digest.sh@80 -- # qd=16
00:28:42.853 09:02:59 -- host/digest.sh@80 -- # scan_dsa=false
00:28:42.853 09:02:59 -- host/digest.sh@83 -- # bperfpid=2221210
00:28:42.853 09:02:59 -- host/digest.sh@84 -- # waitforlisten 2221210 /var/tmp/bperf.sock
00:28:42.853 09:02:59 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:28:42.853 09:02:59 -- common/autotest_common.sh@817 -- # '[' -z 2221210 ']'
00:28:42.853 09:02:59 --
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:42.853 09:02:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:42.853 09:02:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:42.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:42.853 09:02:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:42.853 09:02:59 -- common/autotest_common.sh@10 -- # set +x 00:28:42.853 [2024-04-26 09:03:00.028362] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:28:42.853 [2024-04-26 09:03:00.028421] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221210 ] 00:28:42.853 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:42.853 Zero copy mechanism will not be used. 00:28:42.853 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.111 [2024-04-26 09:03:00.101643] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.111 [2024-04-26 09:03:00.173794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.675 09:03:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:43.675 09:03:00 -- common/autotest_common.sh@850 -- # return 0 00:28:43.675 09:03:00 -- host/digest.sh@86 -- # false 00:28:43.675 09:03:00 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:43.675 09:03:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:43.933 09:03:01 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.933 09:03:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.191 nvme0n1 00:28:44.191 09:03:01 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:44.191 09:03:01 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:44.191 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:44.191 Zero copy mechanism will not be used. 00:28:44.191 Running I/O for 2 seconds... 
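The 131072-byte reads above also trip bdevperf's notice that any I/O larger than the 65536-byte zero-copy threshold will skip the zero-copy send path, so the digest is computed over buffered data. After the run, the table below is followed by the same accounting check every run ends with: read the accel layer's stats over the bperf socket and require that crc32c operations were actually executed, by the expected module (software here, DSA only when dsa_initiator is set). A sketch of that check, reusing the jq filter from the trace ($spdk as in the earlier sketch):

    "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
        | { read -r acc_module acc_executed
            # fail the run if no digests ran, or the wrong engine ran them
            (( acc_executed > 0 )) && [[ $acc_module == software ]]; }

This is what proves the test exercised the digest path at all: a run that negotiated no data digest would report zero executed crc32c operations.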
00:28:46.717
00:28:46.717 Latency(us)
00:28:46.717 Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min        max
00:28:46.717 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:46.717 nvme0n1            :       2.00  2378.36   297.30     0.00     0.00   6725.78  5872.03   14050.92
00:28:46.717 ===================================================================================================================
00:28:46.717 Total              :             2378.36   297.30     0.00     0.00   6725.78  5872.03   14050.92
00:28:46.717 0
00:28:46.717 09:03:03 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:46.717 09:03:03 -- host/digest.sh@93 -- # get_accel_stats
00:28:46.717 09:03:03 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:46.717 09:03:03 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:46.717 | select(.opcode=="crc32c")
00:28:46.717 | "\(.module_name) \(.executed)"'
00:28:46.717 09:03:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:46.717 09:03:03 -- host/digest.sh@94 -- # false
00:28:46.717 09:03:03 -- host/digest.sh@94 -- # exp_module=software
00:28:46.717 09:03:03 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:46.717 09:03:03 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:46.717 09:03:03 -- host/digest.sh@98 -- # killprocess 2221210
00:28:46.717 09:03:03 -- common/autotest_common.sh@936 -- # '[' -z 2221210 ']'
00:28:46.717 09:03:03 -- common/autotest_common.sh@940 -- # kill -0 2221210
00:28:46.717 09:03:03 -- common/autotest_common.sh@941 -- # uname
00:28:46.717 09:03:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:46.717 09:03:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2221210
00:28:46.717 09:03:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:28:46.717 09:03:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:28:46.717 09:03:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2221210'
00:28:46.717 killing process with pid 2221210
00:28:46.717 09:03:03 -- common/autotest_common.sh@955 -- # kill 2221210
00:28:46.717 Received shutdown signal, test time was about 2.000000 seconds
00:28:46.717
00:28:46.717 Latency(us)
00:28:46.717 Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min        max
00:28:46.717 ===================================================================================================================
00:28:46.717 Total              :                0.00     0.00     0.00     0.00      0.00     0.00       0.00
00:28:46.717 09:03:03 -- common/autotest_common.sh@960 -- # wait 2221210
00:28:46.717 09:03:03 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:28:46.717 09:03:03 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:46.717 09:03:03 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:46.717 09:03:03 -- host/digest.sh@80 -- # rw=randwrite
00:28:46.717 09:03:03 -- host/digest.sh@80 -- # bs=4096
00:28:46.717 09:03:03 -- host/digest.sh@80 -- # qd=128
00:28:46.717 09:03:03 -- host/digest.sh@80 -- # scan_dsa=false
00:28:46.717 09:03:03 -- host/digest.sh@83 -- # bperfpid=2222130
00:28:46.717 09:03:03 -- host/digest.sh@84 -- # waitforlisten 2222130 /var/tmp/bperf.sock
00:28:46.717 09:03:03 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:28:46.717 09:03:03 -- common/autotest_common.sh@817 -- # '[' -z 2222130 ']'
00:28:46.717 09:03:03
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:46.717 09:03:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:46.717 09:03:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:46.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:46.717 09:03:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:46.717 09:03:03 -- common/autotest_common.sh@10 -- # set +x 00:28:46.717 [2024-04-26 09:03:03.896535] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:28:46.717 [2024-04-26 09:03:03.896586] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222130 ] 00:28:46.717 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.974 [2024-04-26 09:03:03.967514] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.974 [2024-04-26 09:03:04.039045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.538 09:03:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:47.538 09:03:04 -- common/autotest_common.sh@850 -- # return 0 00:28:47.538 09:03:04 -- host/digest.sh@86 -- # false 00:28:47.538 09:03:04 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:47.538 09:03:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:47.795 09:03:04 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.795 09:03:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:48.052 nvme0n1 00:28:48.052 09:03:05 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:48.052 09:03:05 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:48.052 Running I/O for 2 seconds... 
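The randwrite 4096/qd128 table follows. Across nvmf_digest_clean, the four invocations differ only in the workload triple handed to run_bperf, which maps directly onto bdevperf flags:

    # run_bperf <rw> <bs> <qd> <scan_dsa>        ->  bdevperf -w <rw> -o <bs> -q <qd>
    run_bperf randread  4096   128 false          #  small reads, deep queue
    run_bperf randread  131072 16  false          #  large reads, shallow queue
    run_bperf randwrite 4096   128 false          #  small writes, deep queue
    run_bperf randwrite 131072 16  false          #  large writes, shallow queue

The small-IO/deep-queue runs stress per-PDU digest setup cost, while the large-IO runs stress CRC32C throughput over the payload; the write runs exercise digest generation on the initiator rather than verification.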
00:28:50.576
00:28:50.576 Latency(us)
00:28:50.576 Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min        max
00:28:50.576 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:50.576 nvme0n1            :       2.00 27643.12   107.98     0.00     0.00   4622.26  2503.48   25165.82
00:28:50.576 ===================================================================================================================
00:28:50.576 Total              :            27643.12   107.98     0.00     0.00   4622.26  2503.48   25165.82
00:28:50.576 0
00:28:50.576 09:03:07 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:50.576 09:03:07 -- host/digest.sh@93 -- # get_accel_stats
00:28:50.576 09:03:07 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:50.576 09:03:07 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:50.576 | select(.opcode=="crc32c")
00:28:50.576 | "\(.module_name) \(.executed)"'
00:28:50.576 09:03:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:50.576 09:03:07 -- host/digest.sh@94 -- # false
00:28:50.576 09:03:07 -- host/digest.sh@94 -- # exp_module=software
00:28:50.576 09:03:07 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:50.576 09:03:07 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:50.576 09:03:07 -- host/digest.sh@98 -- # killprocess 2222130
00:28:50.576 09:03:07 -- common/autotest_common.sh@936 -- # '[' -z 2222130 ']'
00:28:50.576 09:03:07 -- common/autotest_common.sh@940 -- # kill -0 2222130
00:28:50.576 09:03:07 -- common/autotest_common.sh@941 -- # uname
00:28:50.576 09:03:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:50.576 09:03:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2222130
00:28:50.576 09:03:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:28:50.576 09:03:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:28:50.576 09:03:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2222130'
00:28:50.576 killing process with pid 2222130
00:28:50.576 09:03:07 -- common/autotest_common.sh@955 -- # kill 2222130
00:28:50.576 Received shutdown signal, test time was about 2.000000 seconds
00:28:50.576
00:28:50.576 Latency(us)
00:28:50.576 Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min        max
00:28:50.576 ===================================================================================================================
00:28:50.576 Total              :                0.00     0.00     0.00     0.00      0.00     0.00       0.00
00:28:50.576 09:03:07 -- common/autotest_common.sh@960 -- # wait 2222130
00:28:50.576 09:03:07 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:28:50.576 09:03:07 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:28:50.576 09:03:07 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:28:50.576 09:03:07 -- host/digest.sh@80 -- # rw=randwrite
00:28:50.576 09:03:07 -- host/digest.sh@80 -- # bs=131072
00:28:50.576 09:03:07 -- host/digest.sh@80 -- # qd=16
00:28:50.576 09:03:07 -- host/digest.sh@80 -- # scan_dsa=false
00:28:50.576 09:03:07 -- host/digest.sh@83 -- # bperfpid=2223119
00:28:50.576 09:03:07 -- host/digest.sh@84 -- # waitforlisten 2223119 /var/tmp/bperf.sock
00:28:50.576 09:03:07 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:28:50.576 09:03:07 -- common/autotest_common.sh@817 -- # '[' -z 2223119 ']'
09:03:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:50.576 09:03:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:50.576 09:03:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:50.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:50.576 09:03:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:50.576 09:03:07 -- common/autotest_common.sh@10 -- # set +x 00:28:50.576 [2024-04-26 09:03:07.780499] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:28:50.576 [2024-04-26 09:03:07.780556] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223119 ] 00:28:50.576 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:50.576 Zero copy mechanism will not be used. 00:28:50.576 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.834 [2024-04-26 09:03:07.850801] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.834 [2024-04-26 09:03:07.914782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.405 09:03:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:51.405 09:03:08 -- common/autotest_common.sh@850 -- # return 0 00:28:51.405 09:03:08 -- host/digest.sh@86 -- # false 00:28:51.405 09:03:08 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:51.405 09:03:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:51.663 09:03:08 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.663 09:03:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.920 nvme0n1 00:28:51.920 09:03:09 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:51.920 09:03:09 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:52.177 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:52.177 Zero copy mechanism will not be used. 00:28:52.177 Running I/O for 2 seconds... 
00:28:54.074 
00:28:54.074 Latency(us)
00:28:54.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:54.074 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:54.074 nvme0n1 : 2.01 1644.15 205.52 0.00 0.00 9710.84 6920.60 37329.31
00:28:54.074 ===================================================================================================================
00:28:54.074 Total : 1644.15 205.52 0.00 0.00 9710.84 6920.60 37329.31
00:28:54.074 0
00:28:54.074 09:03:11 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:54.074 09:03:11 -- host/digest.sh@93 -- # get_accel_stats
00:28:54.074 09:03:11 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:54.074 09:03:11 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:54.074 | select(.opcode=="crc32c")
00:28:54.074 | "\(.module_name) \(.executed)"'
00:28:54.074 09:03:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:54.332 09:03:11 -- host/digest.sh@94 -- # false
00:28:54.332 09:03:11 -- host/digest.sh@94 -- # exp_module=software
00:28:54.332 09:03:11 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:54.332 09:03:11 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:54.332 09:03:11 -- host/digest.sh@98 -- # killprocess 2223119
00:28:54.332 09:03:11 -- common/autotest_common.sh@936 -- # '[' -z 2223119 ']'
00:28:54.332 09:03:11 -- common/autotest_common.sh@940 -- # kill -0 2223119
00:28:54.332 09:03:11 -- common/autotest_common.sh@941 -- # uname
00:28:54.332 09:03:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:54.332 09:03:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2223119
00:28:54.332 09:03:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:28:54.332 09:03:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:28:54.332 09:03:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2223119'
killing process with pid 2223119
09:03:11 -- common/autotest_common.sh@955 -- # kill 2223119
Received shutdown signal, test time was about 2.000000 seconds
00:28:54.332 
00:28:54.332 Latency(us)
00:28:54.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:54.332 ===================================================================================================================
00:28:54.332 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:54.332 09:03:11 -- common/autotest_common.sh@960 -- # wait 2223119
00:28:54.589 09:03:11 -- host/digest.sh@132 -- # killprocess 2220377
00:28:54.589 09:03:11 -- common/autotest_common.sh@936 -- # '[' -z 2220377 ']'
00:28:54.589 09:03:11 -- common/autotest_common.sh@940 -- # kill -0 2220377
00:28:54.589 09:03:11 -- common/autotest_common.sh@941 -- # uname
00:28:54.589 09:03:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:54.589 09:03:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2220377
00:28:54.589 09:03:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:54.589 09:03:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:54.589 09:03:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2220377'
killing process with pid 2220377
09:03:11 -- common/autotest_common.sh@955 -- # kill 2220377
09:03:11 -- common/autotest_common.sh@960 -- # wait 2220377
00:28:54.847 
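
The same checks hold for the 131072-byte run above: 1644.15 IO/s x 131072 B = 215,502,029 B/s, and 215,502,029 / 1,048,576 = 205.52 MiB/s as reported; 1644.15 x 9710.84 us = ~16.0 outstanding I/Os, matching the queue depth of 16. The earlier "I/O size of 131072 is greater than zero copy threshold (65536)" notice is likewise expected arithmetic: 131072 > 65536, so the sock layer falls back to copied sends for this block size instead of zero-copy.
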
00:28:54.847 real 0m16.975s
00:28:54.847 user 0m32.512s
00:28:54.847 sys 0m4.476s
00:28:54.847 09:03:11 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:28:54.847 09:03:11 -- common/autotest_common.sh@10 -- # set +x
00:28:54.847 ************************************
00:28:54.847 END TEST nvmf_digest_clean
00:28:54.847 ************************************
00:28:54.847 09:03:11 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:28:54.847 09:03:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:28:54.847 09:03:11 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:54.847 09:03:11 -- common/autotest_common.sh@10 -- # set +x
00:28:55.104 ************************************
00:28:55.104 START TEST nvmf_digest_error
00:28:55.104 ************************************
00:28:55.104 09:03:12 -- common/autotest_common.sh@1111 -- # run_digest_error
00:28:55.104 09:03:12 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:28:55.104 09:03:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:28:55.104 09:03:12 -- common/autotest_common.sh@710 -- # xtrace_disable
00:28:55.104 09:03:12 -- common/autotest_common.sh@10 -- # set +x
00:28:55.104 09:03:12 -- nvmf/common.sh@470 -- # nvmfpid=2223949
00:28:55.104 09:03:12 -- nvmf/common.sh@471 -- # waitforlisten 2223949
00:28:55.104 09:03:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:28:55.104 09:03:12 -- common/autotest_common.sh@817 -- # '[' -z 2223949 ']'
00:28:55.104 09:03:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:55.104 09:03:12 -- common/autotest_common.sh@822 -- # local max_retries=100
00:28:55.104 09:03:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:55.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:55.104 09:03:12 -- common/autotest_common.sh@826 -- # xtrace_disable
00:28:55.104 09:03:12 -- common/autotest_common.sh@10 -- # set +x
00:28:55.104 [2024-04-26 09:03:12.206906] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:28:55.104 [2024-04-26 09:03:12.206948] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:55.104 EAL: No free 2048 kB hugepages reported on node 1
00:28:55.104 [2024-04-26 09:03:12.280578] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:55.361 [2024-04-26 09:03:12.351591] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:55.361 [2024-04-26 09:03:12.351626] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:55.361 [2024-04-26 09:03:12.351636] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:55.361 [2024-04-26 09:03:12.351644] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:55.361 [2024-04-26 09:03:12.351652] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
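
The --wait-for-rpc flag on nvmf_tgt above is what makes the error test possible: framework initialization pauses until RPCs arrive, so the test can re-route the crc32c opcode to the error-injecting accel module before anything computes a digest (the accel_assign_opc notice appears just below). A sketch of that target-side sequence, assuming the default /var/tmp/spdk.sock RPC socket and assuming init is then resumed with framework_start_init the same way the bperf side does it (the trace does not show that call for the target):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # while the target is still paused in --wait-for-rpc, bind crc32c to the error module
    "$SPDK"/scripts/rpc.py accel_assign_opc -o crc32c -m error
    # assumption: framework init is then resumed, as on the bperf side above
    "$SPDK"/scripts/rpc.py framework_start_init
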
00:28:55.361 [2024-04-26 09:03:12.351672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.926 09:03:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:55.926 09:03:12 -- common/autotest_common.sh@850 -- # return 0 00:28:55.926 09:03:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:55.926 09:03:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:55.926 09:03:12 -- common/autotest_common.sh@10 -- # set +x 00:28:55.926 09:03:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:55.926 09:03:13 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:55.926 09:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.926 09:03:13 -- common/autotest_common.sh@10 -- # set +x 00:28:55.926 [2024-04-26 09:03:13.041694] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:55.926 09:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.926 09:03:13 -- host/digest.sh@105 -- # common_target_config 00:28:55.926 09:03:13 -- host/digest.sh@43 -- # rpc_cmd 00:28:55.926 09:03:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.926 09:03:13 -- common/autotest_common.sh@10 -- # set +x 00:28:55.926 null0 00:28:55.926 [2024-04-26 09:03:13.133515] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.926 [2024-04-26 09:03:13.157730] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.926 09:03:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.926 09:03:13 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:55.926 09:03:13 -- host/digest.sh@54 -- # local rw bs qd 00:28:55.926 09:03:13 -- host/digest.sh@56 -- # rw=randread 00:28:55.926 09:03:13 -- host/digest.sh@56 -- # bs=4096 00:28:55.926 09:03:13 -- host/digest.sh@56 -- # qd=128 00:28:55.926 09:03:13 -- host/digest.sh@58 -- # bperfpid=2224226 00:28:55.926 09:03:13 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:55.926 09:03:13 -- host/digest.sh@60 -- # waitforlisten 2224226 /var/tmp/bperf.sock 00:28:55.926 09:03:13 -- common/autotest_common.sh@817 -- # '[' -z 2224226 ']' 00:28:55.926 09:03:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:55.926 09:03:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:55.926 09:03:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:55.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:55.926 09:03:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:55.926 09:03:13 -- common/autotest_common.sh@10 -- # set +x 00:28:56.183 [2024-04-26 09:03:13.202383] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:28:56.183 [2024-04-26 09:03:13.202427] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224226 ] 00:28:56.183 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.183 [2024-04-26 09:03:13.270883] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.183 [2024-04-26 09:03:13.337578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.113 09:03:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:57.113 09:03:14 -- common/autotest_common.sh@850 -- # return 0 00:28:57.113 09:03:14 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:57.113 09:03:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:57.113 09:03:14 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:57.113 09:03:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.113 09:03:14 -- common/autotest_common.sh@10 -- # set +x 00:28:57.113 09:03:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.113 09:03:14 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.113 09:03:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.370 nvme0n1 00:28:57.370 09:03:14 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:57.370 09:03:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.370 09:03:14 -- common/autotest_common.sh@10 -- # set +x 00:28:57.370 09:03:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.370 09:03:14 -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:57.370 09:03:14 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:57.627 Running I/O for 2 seconds... 
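
Every "data digest error" record that follows is injected, not real wire corruption: the host attached with --ddgst, the target's crc32c opcode is bound to the error module, and that module is told to corrupt 256 results, so the digest the target sends no longer matches what the host computes on receive. Each affected READ therefore completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) and is retried, which the --bdev-retry-count -1 set above allows indefinitely. The two RPCs from the trace that arm this (the second is issued via rpc_cmd in the trace, shown here as the equivalent rpc.py call against the target's default socket):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # host side: keep per-opcode NVMe error counters and never give up on retries
    "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side: corrupt the next 256 crc32c results so the host sees bad data digests
    "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
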
00:28:57.627 [2024-04-26 09:03:14.665702] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.627 [2024-04-26 09:03:14.665736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.627 [2024-04-26 09:03:14.665749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.627 [2024-04-26 09:03:14.678724] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.627 [2024-04-26 09:03:14.678749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.627 [2024-04-26 09:03:14.678765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.627 [2024-04-26 09:03:14.688013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.627 [2024-04-26 09:03:14.688035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.627 [2024-04-26 09:03:14.688047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.627 [2024-04-26 09:03:14.696972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.627 [2024-04-26 09:03:14.696994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.627 [2024-04-26 09:03:14.697005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.627 [2024-04-26 09:03:14.706414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.627 [2024-04-26 09:03:14.706437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.627 [2024-04-26 09:03:14.706448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.627 [2024-04-26 09:03:14.714844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.627 [2024-04-26 09:03:14.714865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.627 [2024-04-26 09:03:14.714876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.627 [2024-04-26 09:03:14.725005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.627 [2024-04-26 09:03:14.725027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.627 [2024-04-26 09:03:14.725037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.627 [2024-04-26 09:03:14.734095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.627 [2024-04-26 09:03:14.734117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.627 [2024-04-26 09:03:14.734127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.627 [2024-04-26 09:03:14.742259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.627 [2024-04-26 09:03:14.742280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.627 [2024-04-26 09:03:14.742291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.627 [2024-04-26 09:03:14.752974] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.627 [2024-04-26 09:03:14.752996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.627 [2024-04-26 09:03:14.753006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.627 [2024-04-26 09:03:14.760596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.627 [2024-04-26 09:03:14.760620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.627 [2024-04-26 09:03:14.760631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.627 [2024-04-26 09:03:14.771013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.627 [2024-04-26 09:03:14.771035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.627 [2024-04-26 09:03:14.771045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.627 [2024-04-26 09:03:14.779426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.627 [2024-04-26 09:03:14.779446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.628 [2024-04-26 09:03:14.779462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.628 [2024-04-26 09:03:14.788273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.628 [2024-04-26 09:03:14.788294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.628 [2024-04-26 09:03:14.788305] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.628 [2024-04-26 09:03:14.798216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.628 [2024-04-26 09:03:14.798237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.628 [2024-04-26 09:03:14.798247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.628 [2024-04-26 09:03:14.806633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.628 [2024-04-26 09:03:14.806654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.628 [2024-04-26 09:03:14.806664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.628 [2024-04-26 09:03:14.815809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.628 [2024-04-26 09:03:14.815830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.628 [2024-04-26 09:03:14.815840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.628 [2024-04-26 09:03:14.824565] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.628 [2024-04-26 09:03:14.824585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.628 [2024-04-26 09:03:14.824596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.628 [2024-04-26 09:03:14.833777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.628 [2024-04-26 09:03:14.833798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.628 [2024-04-26 09:03:14.833809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.628 [2024-04-26 09:03:14.844087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.628 [2024-04-26 09:03:14.844109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.628 [2024-04-26 09:03:14.844119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.628 [2024-04-26 09:03:14.852815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.628 [2024-04-26 09:03:14.852836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.628 [2024-04-26 
09:03:14.852846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.628 [2024-04-26 09:03:14.861589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.628 [2024-04-26 09:03:14.861610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.628 [2024-04-26 09:03:14.861620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.628 [2024-04-26 09:03:14.871256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.628 [2024-04-26 09:03:14.871277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.628 [2024-04-26 09:03:14.871288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.885 [2024-04-26 09:03:14.879849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.885 [2024-04-26 09:03:14.879870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.885 [2024-04-26 09:03:14.879880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.885 [2024-04-26 09:03:14.888947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.885 [2024-04-26 09:03:14.888967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.885 [2024-04-26 09:03:14.888978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.885 [2024-04-26 09:03:14.897991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.885 [2024-04-26 09:03:14.898011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.885 [2024-04-26 09:03:14.898021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.885 [2024-04-26 09:03:14.908365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.885 [2024-04-26 09:03:14.908386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.885 [2024-04-26 09:03:14.908396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.885 [2024-04-26 09:03:14.916706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.885 [2024-04-26 09:03:14.916730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2614 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:57.885 [2024-04-26 09:03:14.916741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.885 [2024-04-26 09:03:14.926123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.885 [2024-04-26 09:03:14.926144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.885 [2024-04-26 09:03:14.926155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.885 [2024-04-26 09:03:14.935458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.885 [2024-04-26 09:03:14.935478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.885 [2024-04-26 09:03:14.935489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.885 [2024-04-26 09:03:14.943891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.885 [2024-04-26 09:03:14.943911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.885 [2024-04-26 09:03:14.943922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.885 [2024-04-26 09:03:14.953044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.885 [2024-04-26 09:03:14.953065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.885 [2024-04-26 09:03:14.953076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.885 [2024-04-26 09:03:14.963113] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.885 [2024-04-26 09:03:14.963134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.885 [2024-04-26 09:03:14.963144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.885 [2024-04-26 09:03:14.972537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.885 [2024-04-26 09:03:14.972557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.885 [2024-04-26 09:03:14.972568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.885 [2024-04-26 09:03:14.981286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.885 [2024-04-26 09:03:14.981306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:70 nsid:1 lba:12277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.885 [2024-04-26 09:03:14.981317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.885 [2024-04-26 09:03:14.990603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:14.990624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:14.990634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.886 [2024-04-26 09:03:14.999792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:14.999813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:14.999823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.886 [2024-04-26 09:03:15.009301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:15.009322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:15.009332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.886 [2024-04-26 09:03:15.018101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:15.018121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:15.018131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.886 [2024-04-26 09:03:15.027200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:15.027221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:15.027231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.886 [2024-04-26 09:03:15.035767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:15.035788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:15.035799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.886 [2024-04-26 09:03:15.045575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:15.045596] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:15.045607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.886 [2024-04-26 09:03:15.054210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:15.054230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:15.054240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.886 [2024-04-26 09:03:15.063627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:15.063647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:15.063658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.886 [2024-04-26 09:03:15.072808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:15.072829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:15.072842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.886 [2024-04-26 09:03:15.081806] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:15.081827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:15.081837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.886 [2024-04-26 09:03:15.090657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:15.090679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:15.090689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.886 [2024-04-26 09:03:15.100045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:15.100066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:15.100076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.886 [2024-04-26 09:03:15.107991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:15.108011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:15.108022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.886 [2024-04-26 09:03:15.118115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:15.118136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:15.118146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.886 [2024-04-26 09:03:15.127281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:57.886 [2024-04-26 09:03:15.127303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.886 [2024-04-26 09:03:15.127313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.136329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.136349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.136359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.145715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.145736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.145747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.155210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.155234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.155245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.164933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.164953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.164963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.173955] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.173977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.173988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.183647] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.183668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.183679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.191884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.191905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.191915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.201097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.201118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.201129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.210854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.210875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.210886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.219887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.219908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.219918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.228164] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.228184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.228195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.237644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.237665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.237676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.246732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.246753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.246763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.255293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.255314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.255324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.265422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.265443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.265458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.274457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.274478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.274489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.283811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.283832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.283842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.143 [2024-04-26 09:03:15.292498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:58.143 [2024-04-26 09:03:15.292518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.143 [2024-04-26 09:03:15.292529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:58.143 [2024-04-26 09:03:15.301567] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960)
00:28:58.143 [2024-04-26 09:03:15.301588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:58.143 [2024-04-26 09:03:15.301599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... ~140 further identical log triplets elided (09:03:15.311 through 09:03:16.624): each is a data digest error on tqpair=(0x131b960), the affected READ on sqid:1 (cid and lba vary per I/O), and its completion with TRANSIENT TRANSPORT ERROR (00/22) ...]
00:28:59.433 [2024-04-26 09:03:16.634190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960)
00:28:59.433 [2024-04-26 09:03:16.634210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:59.433 [2024-04-26 09:03:16.634220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
lba:16926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.433 [2024-04-26 09:03:16.597596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.433 [2024-04-26 09:03:16.607607] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:59.433 [2024-04-26 09:03:16.607628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.433 [2024-04-26 09:03:16.607638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.433 [2024-04-26 09:03:16.616204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:59.433 [2024-04-26 09:03:16.616228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.433 [2024-04-26 09:03:16.616238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.433 [2024-04-26 09:03:16.624929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:59.433 [2024-04-26 09:03:16.624949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.433 [2024-04-26 09:03:16.624960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.433 [2024-04-26 09:03:16.634190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x131b960) 00:28:59.433 [2024-04-26 09:03:16.634210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.433 [2024-04-26 09:03:16.634220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.433 00:28:59.433 Latency(us) 00:28:59.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.433 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:59.433 nvme0n1 : 2.00 26931.55 105.20 0.00 0.00 4747.95 2359.30 28521.27 00:28:59.433 =================================================================================================================== 00:28:59.433 Total : 26931.55 105.20 0.00 0.00 4747.95 2359.30 28521.27 00:28:59.433 0 00:28:59.433 09:03:16 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:59.433 09:03:16 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:59.433 09:03:16 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:59.433 | .driver_specific 00:28:59.433 | .nvme_error 00:28:59.433 | .status_code 00:28:59.433 | .command_transient_transport_error' 00:28:59.434 09:03:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:59.691 09:03:16 -- host/digest.sh@71 -- # (( 211 > 0 )) 00:28:59.691 09:03:16 -- host/digest.sh@73 -- # killprocess 2224226 00:28:59.691 09:03:16 -- common/autotest_common.sh@936 -- # '[' -z 2224226 ']' 
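The trace above is how digest.sh decides this pass succeeded: get_transient_errcount queries per-bdev I/O statistics over the bdevperf RPC socket, and jq digs the NVMe transient-transport-error counter out of driver_specific; 211 injected digest errors were counted, so the (( 211 > 0 )) check passes. A minimal standalone sketch of the same check, assuming an SPDK checkout as the working directory and the socket path and bdev name from the trace (the errcount variable is illustrative):

# Sketch: count COMMAND TRANSIENT TRANSPORT ERROR completions recorded by bdevperf.
# The nvme_error block only appears when bdev_nvme_set_options --nvme-error-stat
# was given, as the trace does before each run.
errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# The test only passes if at least one injected digest error surfaced:
(( errcount > 0 )) || echo 'no transient transport errors counted' >&2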
00:28:59.691 09:03:16 -- host/digest.sh@73 -- # killprocess 2224226
00:28:59.691 09:03:16 -- common/autotest_common.sh@936 -- # '[' -z 2224226 ']'
00:28:59.691 09:03:16 -- common/autotest_common.sh@940 -- # kill -0 2224226
00:28:59.691 09:03:16 -- common/autotest_common.sh@941 -- # uname
00:28:59.691 09:03:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:59.691 09:03:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2224226
00:28:59.691 09:03:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:28:59.691 09:03:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:28:59.691 09:03:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2224226'
killing process with pid 2224226
00:28:59.691 09:03:16 -- common/autotest_common.sh@955 -- # kill 2224226
Received shutdown signal, test time was about 2.000000 seconds
00:28:59.691
00:28:59.691 Latency(us)
00:28:59.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:59.691 ===================================================================================================================
00:28:59.691 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:59.691 09:03:16 -- common/autotest_common.sh@960 -- # wait 2224226
00:28:59.973 09:03:17 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:59.973 09:03:17 -- host/digest.sh@54 -- # local rw bs qd
00:28:59.973 09:03:17 -- host/digest.sh@56 -- # rw=randread
00:28:59.973 09:03:17 -- host/digest.sh@56 -- # bs=131072
00:28:59.973 09:03:17 -- host/digest.sh@56 -- # qd=16
00:28:59.973 09:03:17 -- host/digest.sh@58 -- # bperfpid=2224783
00:28:59.973 09:03:17 -- host/digest.sh@60 -- # waitforlisten 2224783 /var/tmp/bperf.sock
00:28:59.973 09:03:17 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:59.973 09:03:17 -- common/autotest_common.sh@817 -- # '[' -z 2224783 ']'
00:28:59.973 09:03:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:59.973 09:03:17 -- common/autotest_common.sh@822 -- # local max_retries=100
00:28:59.973 09:03:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:59.973 09:03:17 -- common/autotest_common.sh@826 -- # xtrace_disable
00:28:59.973 09:03:17 -- common/autotest_common.sh@10 -- # set +x
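run_bperf_err then repeats the experiment with the next workload: randread, 131072-byte I/O, queue depth 16. bdevperf is started with -z so it idles until driven over its private RPC socket, and waitforlisten polls that socket before any configuration is sent. A sketch of the same launch-and-wait pattern, assuming an SPDK checkout as the working directory (the polling loop stands in for the suite's waitforlisten helper, and its retry count is illustrative):

# Sketch: start bdevperf in wait-for-RPC mode (-z) on a private socket,
# with the parameters used by the trace above.
./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# Poll until the UNIX domain socket answers RPCs:
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done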
00:28:59.973 [2024-04-26 09:03:17.145595] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:28:59.973 [2024-04-26 09:03:17.145644] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224783 ]
00:28:59.973 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:59.973 Zero copy mechanism will not be used.
00:28:59.973 EAL: No free 2048 kB hugepages reported on node 1
00:28:59.973 [2024-04-26 09:03:17.215226] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:00.230 [2024-04-26 09:03:17.288176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:00.793 09:03:17 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:29:00.793 09:03:17 -- common/autotest_common.sh@850 -- # return 0
00:29:00.793 09:03:17 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:00.793 09:03:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:01.049 09:03:18 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:01.049 09:03:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:01.049 09:03:18 -- common/autotest_common.sh@10 -- # set +x
00:29:01.049 09:03:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:01.049 09:03:18 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:01.049 09:03:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:01.306 nvme0n1
00:29:01.306 09:03:18 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:01.306 09:03:18 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:01.306 09:03:18 -- common/autotest_common.sh@10 -- # set +x
00:29:01.306 09:03:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:01.306 09:03:18 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:01.306 09:03:18 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:01.306 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:01.306 Zero copy mechanism will not be used.
00:29:01.306 Running I/O for 2 seconds...
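This block is the setup for the run whose output follows: error statistics and unlimited retries are enabled on the bdev layer, any previous crc32c error injection is cleared, the controller is attached with --ddgst so the data digest of every received payload is verified, 32 corrupted crc32c results are injected, and perform_tests kicks off the queued bdevperf job. The same sequence as plain RPC calls, assuming an SPDK checkout as the working directory; note the two accel_error_inject_error calls go through the suite's rpc_cmd wrapper, which (inferred here) targets the nvmf target app's default RPC socket rather than bperf.sock:

# Sketch of the digest-error run setup, following the trace above.
# Count NVMe status codes per bdev and retry failed I/O indefinitely:
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any injection left over from the previous pass (default target socket assumed):
./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
# Attach the TCP controller with data-digest verification enabled:
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt 32 crc32c results so the digests on the wire no longer match the data:
./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
# Start the configured randread job:
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests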
00:29:01.563 [2024-04-26 09:03:18.572477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950)
00:29:01.563 [2024-04-26 09:03:18.572510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.563 [2024-04-26 09:03:18.572523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:01.563 [2024-04-26 09:03:18.586671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950)
00:29:01.563 [2024-04-26 09:03:18.586695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.563 [2024-04-26 09:03:18.586706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats on tqpair=(0x988950) roughly every 12 ms from 09:03:18.598 through 09:03:19.717, 32-block (len:32) reads, always qid:1 cid:15 with varying lba, sqhd cycling 0001/0021/0041/0061, every completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:29:02.605 [2024-04-26 09:03:19.717524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950)
00:29:02.605 [2024-04-26 09:03:19.717545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.605 [2024-04-26 09:03:19.717555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:02.605 [2024-04-26 09:03:19.729796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950)
00:29:02.605 [2024-04-26 09:03:19.729817] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-04-26 09:03:19.729832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.605 [2024-04-26 09:03:19.741768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.605 [2024-04-26 09:03:19.741789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-04-26 09:03:19.741800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.605 [2024-04-26 09:03:19.753737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.605 [2024-04-26 09:03:19.753758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-04-26 09:03:19.753768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.605 [2024-04-26 09:03:19.765709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.605 [2024-04-26 09:03:19.765731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-04-26 09:03:19.765741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.605 [2024-04-26 09:03:19.778219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.605 [2024-04-26 09:03:19.778240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-04-26 09:03:19.778250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.605 [2024-04-26 09:03:19.796005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.605 [2024-04-26 09:03:19.796026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-04-26 09:03:19.796036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.605 [2024-04-26 09:03:19.812083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.605 [2024-04-26 09:03:19.812105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.605 [2024-04-26 09:03:19.812115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.606 [2024-04-26 09:03:19.825433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.606 [2024-04-26 
09:03:19.825461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.606 [2024-04-26 09:03:19.825472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.606 [2024-04-26 09:03:19.841415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.606 [2024-04-26 09:03:19.841437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.606 [2024-04-26 09:03:19.841447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:19.855013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:19.855039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:19.855050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:19.869269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:19.869290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:19.869300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:19.882734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:19.882755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:19.882766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:19.897603] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:19.897626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:19.897637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:19.910737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:19.910758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:19.910769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:19.924292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:19.924313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:19.924324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:19.936590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:19.936611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:19.936621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:19.958105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:19.958126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:19.958136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:19.972442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:19.972470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:19.972484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:19.987326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:19.987349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:19.987360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:20.010538] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:20.010560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:20.010571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:20.025419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:20.025441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:20.025457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:20.039368] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:20.039391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:20.039401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:20.051561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:20.051583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:20.051593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:20.063629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:20.063650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:20.063660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:20.075747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:20.075768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:20.075779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:20.087901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:20.087922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:20.087933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.864 [2024-04-26 09:03:20.100276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:02.864 [2024-04-26 09:03:20.100301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.864 [2024-04-26 09:03:20.100311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.122 [2024-04-26 09:03:20.114421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.122 [2024-04-26 09:03:20.114443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.122 [2024-04-26 09:03:20.114460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:03.122 [2024-04-26 09:03:20.127154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.122 [2024-04-26 09:03:20.127176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.122 [2024-04-26 09:03:20.127186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.122 [2024-04-26 09:03:20.140384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.122 [2024-04-26 09:03:20.140405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.122 [2024-04-26 09:03:20.140416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.122 [2024-04-26 09:03:20.153345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.122 [2024-04-26 09:03:20.153367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.122 [2024-04-26 09:03:20.153377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.122 [2024-04-26 09:03:20.165555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.122 [2024-04-26 09:03:20.165577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.123 [2024-04-26 09:03:20.165587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.123 [2024-04-26 09:03:20.177727] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.123 [2024-04-26 09:03:20.177749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.123 [2024-04-26 09:03:20.177760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.123 [2024-04-26 09:03:20.190107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.123 [2024-04-26 09:03:20.190127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.123 [2024-04-26 09:03:20.190138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.123 [2024-04-26 09:03:20.202301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.123 [2024-04-26 09:03:20.202322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.123 [2024-04-26 09:03:20.202333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.123 [2024-04-26 09:03:20.215956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.123 [2024-04-26 09:03:20.215978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.123 [2024-04-26 09:03:20.215989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.123 [2024-04-26 09:03:20.227626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.123 [2024-04-26 09:03:20.227648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.123 [2024-04-26 09:03:20.227659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.123 [2024-04-26 09:03:20.239831] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.123 [2024-04-26 09:03:20.239852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.123 [2024-04-26 09:03:20.239862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.123 [2024-04-26 09:03:20.251894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.123 [2024-04-26 09:03:20.251916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.123 [2024-04-26 09:03:20.251926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.123 [2024-04-26 09:03:20.263993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.123 [2024-04-26 09:03:20.264014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.123 [2024-04-26 09:03:20.264025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.123 [2024-04-26 09:03:20.283683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.123 [2024-04-26 09:03:20.283705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.123 [2024-04-26 09:03:20.283715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.123 [2024-04-26 09:03:20.300623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.123 [2024-04-26 09:03:20.300645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.123 [2024-04-26 09:03:20.300655] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.123 [2024-04-26 09:03:20.323002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.123 [2024-04-26 09:03:20.323023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.123 [2024-04-26 09:03:20.323034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.123 [2024-04-26 09:03:20.344463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.123 [2024-04-26 09:03:20.344484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.123 [2024-04-26 09:03:20.344497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.123 [2024-04-26 09:03:20.361984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.123 [2024-04-26 09:03:20.362005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.123 [2024-04-26 09:03:20.362015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.382 [2024-04-26 09:03:20.374777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.382 [2024-04-26 09:03:20.374799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.382 [2024-04-26 09:03:20.374809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.382 [2024-04-26 09:03:20.387572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.382 [2024-04-26 09:03:20.387603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.382 [2024-04-26 09:03:20.387630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.382 [2024-04-26 09:03:20.399814] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.382 [2024-04-26 09:03:20.399836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.382 [2024-04-26 09:03:20.399846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.382 [2024-04-26 09:03:20.412637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.382 [2024-04-26 09:03:20.412659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.382 [2024-04-26 
09:03:20.412669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.382 [2024-04-26 09:03:20.434392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.382 [2024-04-26 09:03:20.434413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.382 [2024-04-26 09:03:20.434424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.382 [2024-04-26 09:03:20.455048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.382 [2024-04-26 09:03:20.455069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.382 [2024-04-26 09:03:20.455080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.382 [2024-04-26 09:03:20.473127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.382 [2024-04-26 09:03:20.473148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.382 [2024-04-26 09:03:20.473159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.382 [2024-04-26 09:03:20.493119] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.382 [2024-04-26 09:03:20.493144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.382 [2024-04-26 09:03:20.493155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.382 [2024-04-26 09:03:20.506667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.382 [2024-04-26 09:03:20.506689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.382 [2024-04-26 09:03:20.506700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.382 [2024-04-26 09:03:20.518838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.382 [2024-04-26 09:03:20.518859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.382 [2024-04-26 09:03:20.518870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.382 [2024-04-26 09:03:20.530969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.382 [2024-04-26 09:03:20.530990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:03.382 [2024-04-26 09:03:20.531000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.382 [2024-04-26 09:03:20.542976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x988950) 00:29:03.382 [2024-04-26 09:03:20.542999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.382 [2024-04-26 09:03:20.543009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.382 00:29:03.382 Latency(us) 00:29:03.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.382 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:03.382 nvme0n1 : 2.01 2365.97 295.75 0.00 0.00 6758.07 5924.45 28730.98 00:29:03.382 =================================================================================================================== 00:29:03.382 Total : 2365.97 295.75 0.00 0.00 6758.07 5924.45 28730.98 00:29:03.382 0 00:29:03.382 09:03:20 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:03.382 09:03:20 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:03.382 09:03:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:03.382 09:03:20 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:03.382 | .driver_specific 00:29:03.382 | .nvme_error 00:29:03.382 | .status_code 00:29:03.382 | .command_transient_transport_error' 00:29:03.640 09:03:20 -- host/digest.sh@71 -- # (( 153 > 0 )) 00:29:03.640 09:03:20 -- host/digest.sh@73 -- # killprocess 2224783 00:29:03.640 09:03:20 -- common/autotest_common.sh@936 -- # '[' -z 2224783 ']' 00:29:03.640 09:03:20 -- common/autotest_common.sh@940 -- # kill -0 2224783 00:29:03.640 09:03:20 -- common/autotest_common.sh@941 -- # uname 00:29:03.640 09:03:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:03.640 09:03:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2224783 00:29:03.640 09:03:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:03.640 09:03:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:03.640 09:03:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2224783' 00:29:03.640 killing process with pid 2224783 00:29:03.640 09:03:20 -- common/autotest_common.sh@955 -- # kill 2224783 00:29:03.640 Received shutdown signal, test time was about 2.000000 seconds 00:29:03.640 00:29:03.640 Latency(us) 00:29:03.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.640 =================================================================================================================== 00:29:03.640 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:03.640 09:03:20 -- common/autotest_common.sh@960 -- # wait 2224783 00:29:03.899 09:03:20 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:03.899 09:03:20 -- host/digest.sh@54 -- # local rw bs qd 00:29:03.899 09:03:20 -- host/digest.sh@56 -- # rw=randwrite 00:29:03.899 09:03:20 -- host/digest.sh@56 -- # bs=4096 00:29:03.899 09:03:20 -- host/digest.sh@56 -- # qd=128 00:29:03.899 09:03:20 -- host/digest.sh@58 -- # bperfpid=2225538 00:29:03.899 09:03:20 -- host/digest.sh@60 -- 
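The get_transient_errcount call traced above reads the per-bdev NVMe error counters that bdevperf accumulates when bdev_nvme_set_options is given --nvme-error-stat, filtering the bdev_get_iostat JSON with jq. A minimal standalone sketch of the same query, assuming the SPDK checkout path used in this run and a bdevperf instance already listening on /var/tmp/bperf.sock (the helper name mirrors host/digest.sh and is illustrative):

    #!/usr/bin/env bash
    # Sketch only: mirrors the get_transient_errcount helper traced above.
    # Assumes bdevperf was configured with --nvme-error-stat beforehand.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_sock=/var/tmp/bperf.sock

    transient_errcount() {
        local bdev=$1
        # bdev_get_iostat returns driver-specific NVMe error counters per bdev;
        # pick out the transient transport error count checked by the test.
        "$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    errcount=$(transient_errcount nvme0n1)
    # The digest case passes only if the injected CRC errors actually surfaced
    # as transient transport errors; this run counted 153 of them.
    (( errcount > 0 )) || exit 1
    echo "observed $errcount transient transport errors"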
00:29:03.899 09:03:20 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:03.899 09:03:20 -- host/digest.sh@54 -- # local rw bs qd
00:29:03.899 09:03:20 -- host/digest.sh@56 -- # rw=randwrite
00:29:03.899 09:03:20 -- host/digest.sh@56 -- # bs=4096
00:29:03.899 09:03:20 -- host/digest.sh@56 -- # qd=128
00:29:03.899 09:03:20 -- host/digest.sh@58 -- # bperfpid=2225538
00:29:03.899 09:03:20 -- host/digest.sh@60 -- # waitforlisten 2225538 /var/tmp/bperf.sock
00:29:03.899 09:03:20 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:03.899 09:03:20 -- common/autotest_common.sh@817 -- # '[' -z 2225538 ']'
00:29:03.899 09:03:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:03.899 09:03:20 -- common/autotest_common.sh@822 -- # local max_retries=100
00:29:03.899 09:03:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:03.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:03.899 09:03:20 -- common/autotest_common.sh@826 -- # xtrace_disable
00:29:03.899 09:03:20 -- common/autotest_common.sh@10 -- # set +x
00:29:03.899 [2024-04-26 09:03:21.042223] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:29:03.899 [2024-04-26 09:03:21.042279] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225538 ]
00:29:03.899 EAL: No free 2048 kB hugepages reported on node 1
00:29:03.899 [2024-04-26 09:03:21.111878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:04.157 [2024-04-26 09:03:21.177012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:04.721 09:03:21 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:29:04.721 09:03:21 -- common/autotest_common.sh@850 -- # return 0
00:29:04.721 09:03:21 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:04.721 09:03:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:04.978 09:03:22 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:04.978 09:03:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:04.978 09:03:22 -- common/autotest_common.sh@10 -- # set +x
00:29:04.978 09:03:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:04.978 09:03:22 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:04.978 09:03:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:05.235 nvme0n1
00:29:05.235 09:03:22 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:05.235 09:03:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:05.235 09:03:22 -- common/autotest_common.sh@10 -- # set +x
00:29:05.235 09:03:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:05.235 09:03:22 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:05.235 09:03:22 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:05.493 Running I/O for 2 seconds...
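The trace above is the complete setup for the randwrite/4096/qd128 error case: bdevperf is launched on its own RPC socket in wait mode, NVMe error counters and unlimited bdev retries are enabled, CRC32C error injection is disabled while the controller attaches with data digest enabled (--ddgst), every 256th crc32c operation is then corrupted, and perform_tests starts the 2-second run. A condensed sketch of that sequence under the same assumptions (paths, address, and NQN taken from this run; note that in the trace the accel_error_inject_error calls go through rpc_cmd, i.e. the framework's default RPC socket, not the bperf socket):

    #!/usr/bin/env bash
    # Sketch only: condenses the randwrite digest-error setup traced above.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_sock=/var/tmp/bperf.sock

    # Launch bdevperf on core 1 (-m 2) with a private RPC socket; -z makes it
    # wait for RPC configuration before running the 2 s randwrite workload.
    "$rootdir/build/examples/bdevperf" -m 2 -r "$bperf_sock" \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    # Count NVMe errors per bdev and retry failed I/O indefinitely, so the
    # injected digest errors surface as counters instead of failing the job.
    "$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # Injection stays off while the controller attaches; --ddgst enables TCP
    # data digests on the new controller. (rpc_cmd in the trace targets the
    # framework's default RPC socket, not $bperf_sock.)
    "$rootdir/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
    "$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 256th crc32c operation, then kick off the workload.
    "$rootdir/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests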
00:29:05.493 [2024-04-26 09:03:22.547325] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190fdeb0
00:29:05.493 [2024-04-26 09:03:22.548329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:05.493 [2024-04-26 09:03:22.548361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0
[... the same Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats on tqpair=(0xb3d910) from 09:03:22.557 through 09:03:23.001, lba and cid varying, cycling through pdu values 0x2000190f57b0, 0x2000190f4b08, 0x2000190f7da8, 0x2000190fd640, 0x2000190eaab8, 0x2000190f2d80, 0x2000190f81e0, 0x2000190f8a50, 0x2000190eaef0, 0x2000190fcdd0, 0x2000190ec408 and 0x2000190f3e60 ...]
00:29:06.012 [2024-04-26 09:03:23.010224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60
00:29:06.012 [2024-04-26 09:03:23.010494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17359 len:1 SGL DATA
BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.010517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.019587] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.019832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.019852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.028903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.029149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.029169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.038316] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.038587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.038617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.047660] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.047910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.047930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.057199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.057454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.057474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.066527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.066801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.066821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.075865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.076111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 
nsid:1 lba:10794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.076130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.085220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.085466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.085502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.094630] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.094880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.094899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.103968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.104218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.104237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.113312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.113563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.113582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.122698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.122943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.122962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.132038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.132284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.132304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.141353] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.141624] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.141644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.150730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.150975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.150994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.160068] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.160318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.012 [2024-04-26 09:03:23.160337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.012 [2024-04-26 09:03:23.169395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.012 [2024-04-26 09:03:23.169665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.013 [2024-04-26 09:03:23.169684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.013 [2024-04-26 09:03:23.178763] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.013 [2024-04-26 09:03:23.179010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.013 [2024-04-26 09:03:23.179029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.013 [2024-04-26 09:03:23.188148] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.013 [2024-04-26 09:03:23.188400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.013 [2024-04-26 09:03:23.188419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.013 [2024-04-26 09:03:23.197324] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.013 [2024-04-26 09:03:23.197582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.013 [2024-04-26 09:03:23.197601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.013 [2024-04-26 09:03:23.206698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.013 [2024-04-26 09:03:23.206961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.013 [2024-04-26 09:03:23.206983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.013 [2024-04-26 09:03:23.216055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.013 [2024-04-26 09:03:23.216300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.013 [2024-04-26 09:03:23.216319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.013 [2024-04-26 09:03:23.225346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.013 [2024-04-26 09:03:23.225715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.013 [2024-04-26 09:03:23.225734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.013 [2024-04-26 09:03:23.234810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.013 [2024-04-26 09:03:23.235082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.013 [2024-04-26 09:03:23.235101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.013 [2024-04-26 09:03:23.244230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.013 [2024-04-26 09:03:23.244480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.013 [2024-04-26 09:03:23.244499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.013 [2024-04-26 09:03:23.253536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.013 [2024-04-26 09:03:23.253781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.013 [2024-04-26 09:03:23.253800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.282 [2024-04-26 09:03:23.263059] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.282 [2024-04-26 09:03:23.263325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.282 [2024-04-26 09:03:23.263349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.282 [2024-04-26 09:03:23.272429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.282 [2024-04-26 
09:03:23.272699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.282 [2024-04-26 09:03:23.272720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.283 [2024-04-26 09:03:23.281838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.283 [2024-04-26 09:03:23.282083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.283 [2024-04-26 09:03:23.282103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.283 [2024-04-26 09:03:23.291129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.283 [2024-04-26 09:03:23.291384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.283 [2024-04-26 09:03:23.291404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.283 [2024-04-26 09:03:23.300538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.283 [2024-04-26 09:03:23.300807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.283 [2024-04-26 09:03:23.300826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.283 [2024-04-26 09:03:23.310108] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.283 [2024-04-26 09:03:23.310357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.283 [2024-04-26 09:03:23.310377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.283 [2024-04-26 09:03:23.319436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.283 [2024-04-26 09:03:23.319693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.283 [2024-04-26 09:03:23.319713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.283 [2024-04-26 09:03:23.328784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.284 [2024-04-26 09:03:23.329052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.284 [2024-04-26 09:03:23.329071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.284 [2024-04-26 09:03:23.338174] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with 
pdu=0x2000190f3e60 00:29:06.284 [2024-04-26 09:03:23.338423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.284 [2024-04-26 09:03:23.338442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.284 [2024-04-26 09:03:23.347563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.284 [2024-04-26 09:03:23.347808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.284 [2024-04-26 09:03:23.347828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.284 [2024-04-26 09:03:23.356921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.284 [2024-04-26 09:03:23.357185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.284 [2024-04-26 09:03:23.357205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.284 [2024-04-26 09:03:23.366299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.284 [2024-04-26 09:03:23.366543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.284 [2024-04-26 09:03:23.366562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.285 [2024-04-26 09:03:23.375588] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.285 [2024-04-26 09:03:23.375835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.285 [2024-04-26 09:03:23.375855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.285 [2024-04-26 09:03:23.384968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.285 [2024-04-26 09:03:23.385234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.285 [2024-04-26 09:03:23.385253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.285 [2024-04-26 09:03:23.394315] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.285 [2024-04-26 09:03:23.394564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.285 [2024-04-26 09:03:23.394583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.285 [2024-04-26 09:03:23.403637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.285 [2024-04-26 09:03:23.403882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.285 [2024-04-26 09:03:23.403901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.285 [2024-04-26 09:03:23.413000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.285 [2024-04-26 09:03:23.413267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.286 [2024-04-26 09:03:23.413288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.286 [2024-04-26 09:03:23.422355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.286 [2024-04-26 09:03:23.422606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.286 [2024-04-26 09:03:23.422626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.286 [2024-04-26 09:03:23.431664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.286 [2024-04-26 09:03:23.431910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.286 [2024-04-26 09:03:23.431929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.286 [2024-04-26 09:03:23.441027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.286 [2024-04-26 09:03:23.441295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.286 [2024-04-26 09:03:23.441314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.286 [2024-04-26 09:03:23.450375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.287 [2024-04-26 09:03:23.450625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-04-26 09:03:23.450648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.287 [2024-04-26 09:03:23.459599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.287 [2024-04-26 09:03:23.459844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-04-26 09:03:23.459863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.287 [2024-04-26 09:03:23.468877] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.287 [2024-04-26 09:03:23.469143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-04-26 09:03:23.469162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.287 [2024-04-26 09:03:23.478265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.287 [2024-04-26 09:03:23.478532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.287 [2024-04-26 09:03:23.478552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.287 [2024-04-26 09:03:23.487601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.287 [2024-04-26 09:03:23.487850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.288 [2024-04-26 09:03:23.487868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.288 [2024-04-26 09:03:23.496965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.288 [2024-04-26 09:03:23.497230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.288 [2024-04-26 09:03:23.497249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.288 [2024-04-26 09:03:23.506317] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.288 [2024-04-26 09:03:23.506568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.288 [2024-04-26 09:03:23.506587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.288 [2024-04-26 09:03:23.515641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.288 [2024-04-26 09:03:23.515906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.288 [2024-04-26 09:03:23.515928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.288 [2024-04-26 09:03:23.525068] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.288 [2024-04-26 09:03:23.525334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.288 [2024-04-26 09:03:23.525357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.534584] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.534855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.534879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.544063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.544311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.544333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.553505] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.553759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.553780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.563092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.563342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.563362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.572470] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.572740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.572760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.581852] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.582100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.582122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.591171] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.591415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.591434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 
[2024-04-26 09:03:23.600545] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.600810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.600829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.609914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.610158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.610177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.619245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.619495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.619515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.628550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.628799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.628818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.637929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.638193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.638213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.647285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.647535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.647555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.656620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.656869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.656888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.666023] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.666291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.666311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.675356] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.675611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.675630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.684715] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.684964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.684983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.694087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.694336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.694359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.703504] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.703782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.703801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.712885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.713151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.713171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.722275] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.722537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.722557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.731667] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.731914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.731934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.741079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.741344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.741364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.750437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.750712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.750732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.557 [2024-04-26 09:03:23.760046] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.557 [2024-04-26 09:03:23.760287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.557 [2024-04-26 09:03:23.760306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.558 [2024-04-26 09:03:23.769648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.558 [2024-04-26 09:03:23.769897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.558 [2024-04-26 09:03:23.769917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.558 [2024-04-26 09:03:23.779230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.558 [2024-04-26 09:03:23.779490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.558 [2024-04-26 09:03:23.779510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.558 [2024-04-26 09:03:23.788812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.558 [2024-04-26 09:03:23.789066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.558 [2024-04-26 09:03:23.789085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.558 [2024-04-26 09:03:23.798394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.558 [2024-04-26 09:03:23.798659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.558 [2024-04-26 09:03:23.798678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.808471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.808731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.808755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.818248] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.818506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.818527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.828255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.828520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.828540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.838395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.838672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.838692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.848513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.848781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.848802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.858734] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.859003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.859024] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.868623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.868885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.868905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.878257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.878504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.878524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.887851] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.888090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.888109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.897462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.897721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.897742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.906835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.907088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.907107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.916218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.916468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.916488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.925564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.925810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.925829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.934884] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.935129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.935148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.944267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.944516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.944539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.953661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.953913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.953934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.963084] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.963332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.963351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.972435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.972691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.816 [2024-04-26 09:03:23.972710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.816 [2024-04-26 09:03:23.981756] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.816 [2024-04-26 09:03:23.982001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.817 [2024-04-26 09:03:23.982020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.817 [2024-04-26 09:03:23.991095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.817 [2024-04-26 09:03:23.991341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.817 [2024-04-26 
09:03:23.991360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.817 [2024-04-26 09:03:24.000410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.817 [2024-04-26 09:03:24.000683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.817 [2024-04-26 09:03:24.000703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.817 [2024-04-26 09:03:24.009809] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.817 [2024-04-26 09:03:24.010057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.817 [2024-04-26 09:03:24.010077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.817 [2024-04-26 09:03:24.019196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.817 [2024-04-26 09:03:24.019445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.817 [2024-04-26 09:03:24.019470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.817 [2024-04-26 09:03:24.028541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.817 [2024-04-26 09:03:24.028796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.817 [2024-04-26 09:03:24.028816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.817 [2024-04-26 09:03:24.037869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.817 [2024-04-26 09:03:24.038114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.817 [2024-04-26 09:03:24.038133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.817 [2024-04-26 09:03:24.047226] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.817 [2024-04-26 09:03:24.047477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.817 [2024-04-26 09:03:24.047498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.817 [2024-04-26 09:03:24.056524] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:06.817 [2024-04-26 09:03:24.056771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7735 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:06.817 [2024-04-26 09:03:24.056790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.075 [2024-04-26 09:03:24.066017] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.075 [2024-04-26 09:03:24.066266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.075 [2024-04-26 09:03:24.066290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.075 [2024-04-26 09:03:24.075538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.075 [2024-04-26 09:03:24.075788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.075811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.084869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.085115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.085135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.094205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.094456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.094476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.103544] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.103793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.103812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.112892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.113134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.113153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.122246] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.122496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19117 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.122516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.131609] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.131860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.131880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.140945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.141188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.141208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.150306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.150550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.150570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.159656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.159903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.159922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.168986] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.169232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.169252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.178307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.178555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.178574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.187642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.187894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:110 nsid:1 lba:3506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.187917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.197005] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.197250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.197270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.206344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.206600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.206620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.215702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.215948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.215968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.224982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.225227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.225247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.234425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.234679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.234699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.243799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.244041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.244060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.253120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.253368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.253388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.262416] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.262898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.262918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.271794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f3e60 00:29:07.076 [2024-04-26 09:03:24.272407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.272426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.283434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f2d80 00:29:07.076 [2024-04-26 09:03:24.284644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.284664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.292733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f2948 00:29:07.076 [2024-04-26 09:03:24.293770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.293789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.302280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f96f8 00:29:07.076 [2024-04-26 09:03:24.302470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.302489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:07.076 [2024-04-26 09:03:24.311567] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f96f8 00:29:07.076 [2024-04-26 09:03:24.312041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.076 [2024-04-26 09:03:24.312060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:07.334 [2024-04-26 09:03:24.322599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f5378 00:29:07.334 [2024-04-26 
09:03:24.323842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.334 [2024-04-26 09:03:24.323866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.332184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f9f68 00:29:07.335 [2024-04-26 09:03:24.333631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.333654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.341965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f2510 00:29:07.335 [2024-04-26 09:03:24.342181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.342201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.351322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f2510 00:29:07.335 [2024-04-26 09:03:24.351879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.351898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.363764] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f35f0 00:29:07.335 [2024-04-26 09:03:24.364810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.364830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.375501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f4298 00:29:07.335 [2024-04-26 09:03:24.376207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.376227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.384292] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190eaef0 00:29:07.335 [2024-04-26 09:03:24.386180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.386199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.397077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190fe2e8 00:29:07.335 
[2024-04-26 09:03:24.398340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.398360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.407151] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f2948 00:29:07.335 [2024-04-26 09:03:24.407526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.407546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.416471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f2948 00:29:07.335 [2024-04-26 09:03:24.416757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.416776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.425806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f2948 00:29:07.335 [2024-04-26 09:03:24.426021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.426040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.435377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f2948 00:29:07.335 [2024-04-26 09:03:24.435977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.435997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.446982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190f2948 00:29:07.335 [2024-04-26 09:03:24.448438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.448462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.457508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190fa7d8 00:29:07.335 [2024-04-26 09:03:24.457805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.457825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.466862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with 
pdu=0x2000190fa7d8 00:29:07.335 [2024-04-26 09:03:24.467079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.467098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.476118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190fa7d8 00:29:07.335 [2024-04-26 09:03:24.476658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.476677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.485624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190fa7d8 00:29:07.335 [2024-04-26 09:03:24.485835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.485855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.495002] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190fa7d8 00:29:07.335 [2024-04-26 09:03:24.495216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.495235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.504285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190fa7d8 00:29:07.335 [2024-04-26 09:03:24.504891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.504910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.335 [2024-04-26 09:03:24.513602] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3d910) with pdu=0x2000190fa7d8 00:29:07.335 [2024-04-26 09:03:24.513913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.335 [2024-04-26 09:03:24.513932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.335 00:29:07.335 Latency(us) 00:29:07.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.335 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:07.335 nvme0n1 : 2.01 26408.33 103.16 0.00 0.00 4838.42 2791.83 29569.84 00:29:07.335 =================================================================================================================== 00:29:07.335 Total : 26408.33 103.16 0.00 0.00 4838.42 2791.83 29569.84 00:29:07.335 0 00:29:07.335 09:03:24 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:07.335 09:03:24 -- 
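The check just above is the actual pass/fail gate of this digest test: with NVMe error statistics enabled, the bdev layer keeps a per-status error counter, every corrupted-digest write completes with COMMAND TRANSIENT TRANSPORT ERROR, and the run passes only if that counter (207 here) is non-zero. A minimal sketch of the lookup, reconstructed from the xtrace above (the helper name and shape follow the trace; nothing beyond the commands shown is implied):

    get_transient_errcount() {
        local bdev=$1
        # Fetch per-bdev I/O statistics from the bdevperf app over its RPC
        # socket, then extract the NVMe "command transient transport error"
        # status counter that the error-stat option maintains.
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }

    # Fail the test unless at least one write actually tripped a digest error.
    (( $(get_transient_errcount nvme0n1) > 0 ))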
00:29:07.594 09:03:24 -- host/digest.sh@73 -- # killprocess 2225538
00:29:07.594 09:03:24 -- common/autotest_common.sh@936 -- # '[' -z 2225538 ']'
00:29:07.594 09:03:24 -- common/autotest_common.sh@940 -- # kill -0 2225538
00:29:07.594 09:03:24 -- common/autotest_common.sh@941 -- # uname
00:29:07.594 09:03:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:07.594 09:03:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2225538
00:29:07.594 09:03:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:29:07.594 09:03:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:29:07.594 09:03:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2225538'
killing process with pid 2225538
09:03:24 -- common/autotest_common.sh@955 -- # kill 2225538
Received shutdown signal, test time was about 2.000000 seconds
00:29:07.594
00:29:07.594 Latency(us)
00:29:07.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:07.594 ===================================================================================================================
00:29:07.594 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:07.594 09:03:24 -- common/autotest_common.sh@960 -- # wait 2225538
00:29:07.853 09:03:24 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
09:03:24 -- host/digest.sh@54 -- # local rw bs qd
09:03:24 -- host/digest.sh@56 -- # rw=randwrite
09:03:24 -- host/digest.sh@56 -- # bs=131072
09:03:24 -- host/digest.sh@56 -- # qd=16
09:03:24 -- host/digest.sh@58 -- # bperfpid=2226136
09:03:24 -- host/digest.sh@60 -- # waitforlisten 2226136 /var/tmp/bperf.sock
09:03:24 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
09:03:24 -- common/autotest_common.sh@817 -- # '[' -z 2226136 ']'
09:03:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
09:03:24 -- common/autotest_common.sh@822 -- # local max_retries=100
09:03:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
09:03:24 -- common/autotest_common.sh@826 -- # xtrace_disable
00:29:07.854 09:03:24 -- common/autotest_common.sh@10 -- # set +x
00:29:07.854 [2024-04-26 09:03:25.021510] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
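Between the two passes the harness tears down the first bdevperf instance (pid 2225538, running as reactor_1) and relaunches it for the 128 KiB random-write run at queue depth 16. A rough shell equivalent of that teardown-and-relaunch, assuming the paths from this run; the readiness loop is a simplified stand-in for the harness's waitforlisten helper and uses the generic rpc_get_methods RPC as its probe:

    # Stop the previous bdevperf instance and reap it.
    kill 2225538 && wait 2225538

    # Relaunch bdevperf in wait-for-RPC mode (-z) on a dedicated socket:
    # core mask 0x2, randwrite workload, 131072-byte I/O, 2 s run, queue depth 16.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Wait until the app is listening on the UNIX domain socket and answers RPCs.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done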
00:29:07.854 [2024-04-26 09:03:25.021562] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226136 ]
00:29:07.854 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:07.854 Zero copy mechanism will not be used.
00:29:07.854 EAL: No free 2048 kB hugepages reported on node 1
00:29:08.111 [2024-04-26 09:03:25.089519] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:08.111 [2024-04-26 09:03:25.152304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:08.676 09:03:25 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:29:08.676 09:03:25 -- common/autotest_common.sh@850 -- # return 0
00:29:08.676 09:03:25 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:08.676 09:03:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:08.934 09:03:25 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:08.934 09:03:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:08.934 09:03:25 -- common/autotest_common.sh@10 -- # set +x
00:29:08.934 09:03:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:08.934 09:03:25 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:08.934 09:03:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:09.191 nvme0n1
00:29:09.191 09:03:26 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:09.191 09:03:26 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:09.191 09:03:26 -- common/autotest_common.sh@10 -- # set +x
00:29:09.191 09:03:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:09.191 09:03:26 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:09.191 09:03:26 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
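The RPC sequence above is what provokes the flood of digest errors that follows: error counting and unlimited bdev-layer retries are switched on, the controller is attached with --ddgst so every NVMe/TCP data PDU carries a CRC32C data digest, and crc32c error injection is armed. Condensed into plain commands from the xtrace (one assumption: rpc_cmd, shown without -s in the trace, is taken to address the target application's default RPC socket, while bperf_rpc addresses /var/tmp/bperf.sock):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Keep per-status NVMe error counters and retry failed I/O indefinitely
    # inside the bdev layer, so digest errors are counted rather than fatal.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Start from a clean slate: no crc32c error injection configured.
    $rpc accel_error_inject_error -o crc32c -t disable

    # Attach the NVMe-oF TCP controller with data digest (--ddgst) enabled.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Now inject corruption into crc32c operations (-t corrupt -i 32, exactly
    # as recorded in the trace) so data digest verification starts failing.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the timed workload via bdevperf's RPC helper script.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests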
00:29:09.191 [2024-04-26 09:03:26.411995] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90
00:29:09.191 [2024-04-26 09:03:26.412533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.191 [2024-04-26 09:03:26.412565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR pattern repeats roughly every 20 ms on tqpair=(0xb3dbe0) with pdu=0x2000190fef90, qid:1 cid:15, from 09:03:26.430 through 09:03:27.739 ...]
00:29:10.744 [2024-04-26 09:03:27.759765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90
00:29:10.744 [2024-04-26 09:03:27.760273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.744 [2024-04-26 09:03:27.760294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:10.744 [2024-04-26 09:03:27.779534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data
digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:10.744 [2024-04-26 09:03:27.779979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.744 [2024-04-26 09:03:27.779999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.744 [2024-04-26 09:03:27.797197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:10.744 [2024-04-26 09:03:27.797804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.744 [2024-04-26 09:03:27.797824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.744 [2024-04-26 09:03:27.816139] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:10.744 [2024-04-26 09:03:27.816628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.744 [2024-04-26 09:03:27.816649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.744 [2024-04-26 09:03:27.837169] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:10.744 [2024-04-26 09:03:27.837832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.744 [2024-04-26 09:03:27.837852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.744 [2024-04-26 09:03:27.857222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:10.744 [2024-04-26 09:03:27.857892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.744 [2024-04-26 09:03:27.857912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.744 [2024-04-26 09:03:27.877322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:10.744 [2024-04-26 09:03:27.878015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.744 [2024-04-26 09:03:27.878035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.744 [2024-04-26 09:03:27.897366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:10.744 [2024-04-26 09:03:27.897961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.744 [2024-04-26 09:03:27.897982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.744 [2024-04-26 09:03:27.917471] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:10.744 [2024-04-26 09:03:27.918134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.744 [2024-04-26 09:03:27.918154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.744 [2024-04-26 09:03:27.937891] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:10.744 [2024-04-26 09:03:27.938357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.744 [2024-04-26 09:03:27.938377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.744 [2024-04-26 09:03:27.957910] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:10.744 [2024-04-26 09:03:27.958596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.744 [2024-04-26 09:03:27.958617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.744 [2024-04-26 09:03:27.976155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:10.744 [2024-04-26 09:03:27.976745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.745 [2024-04-26 09:03:27.976765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.004 [2024-04-26 09:03:27.993987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.004 [2024-04-26 09:03:27.994534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.004 [2024-04-26 09:03:27.994557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.004 [2024-04-26 09:03:28.012090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.004 [2024-04-26 09:03:28.012703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.004 [2024-04-26 09:03:28.012725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.004 [2024-04-26 09:03:28.031888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.004 [2024-04-26 09:03:28.032424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.004 [2024-04-26 09:03:28.032445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
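Each triple above is one 128 KiB write (IO size 131072 in the summary below) whose TCP data digest fails verification: data_crc32_calc_done rejects the PDU's CRC32C, and the command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is the condition this nvmf_digest_error case is counting. The digest path only runs because the bperf controller was attached with the TCP digests enabled; the gen_nvmf_target_json helper traced later in this log defaults them through ${hdgst:-false} and ${ddgst:-false}, so a minimal sketch of turning them on is:

    # Sketch: the generator picks the digest flags up from the environment,
    # flipping "hdgst"/"ddgst" to true in the emitted
    # bdev_nvme_attach_controller params.
    hdgst=true ddgst=true gen_nvmf_target_json | jq .

The authoritative error count is read back over RPC after the run (the bdev_get_iostat trace below), but a hedged cross-check against a saved copy of this console output is just a grep (file name hypothetical):

    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log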
00:29:11.004 [2024-04-26 09:03:28.051519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.004 [2024-04-26 09:03:28.052042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.004 [2024-04-26 09:03:28.052062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.004 [2024-04-26 09:03:28.069978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.004 [2024-04-26 09:03:28.070677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.004 [2024-04-26 09:03:28.070706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.004 [2024-04-26 09:03:28.088348] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.004 [2024-04-26 09:03:28.089024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.004 [2024-04-26 09:03:28.089045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.004 [2024-04-26 09:03:28.105808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.004 [2024-04-26 09:03:28.106551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.004 [2024-04-26 09:03:28.106571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.004 [2024-04-26 09:03:28.124797] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.004 [2024-04-26 09:03:28.125476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.004 [2024-04-26 09:03:28.125497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.004 [2024-04-26 09:03:28.143236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.004 [2024-04-26 09:03:28.143891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.004 [2024-04-26 09:03:28.143911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.004 [2024-04-26 09:03:28.159408] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.004 [2024-04-26 09:03:28.159882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.004 [2024-04-26 09:03:28.159903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.004 [2024-04-26 09:03:28.179330] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.004 [2024-04-26 09:03:28.179940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.004 [2024-04-26 09:03:28.179960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.004 [2024-04-26 09:03:28.198250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.004 [2024-04-26 09:03:28.198799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.004 [2024-04-26 09:03:28.198819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.004 [2024-04-26 09:03:28.216919] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.004 [2024-04-26 09:03:28.217579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.004 [2024-04-26 09:03:28.217603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.004 [2024-04-26 09:03:28.236848] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.004 [2024-04-26 09:03:28.237383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.004 [2024-04-26 09:03:28.237404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.264 [2024-04-26 09:03:28.255212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.264 [2024-04-26 09:03:28.255650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.264 [2024-04-26 09:03:28.255673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.264 [2024-04-26 09:03:28.274832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.264 [2024-04-26 09:03:28.275391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.264 [2024-04-26 09:03:28.275412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.264 [2024-04-26 09:03:28.295707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90 00:29:11.264 [2024-04-26 09:03:28.296325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.264 [2024-04-26 09:03:28.296346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.264 [2024-04-26 09:03:28.315858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90
00:29:11.264 [2024-04-26 09:03:28.316483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.264 [2024-04-26 09:03:28.316503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:11.264 [2024-04-26 09:03:28.337312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90
00:29:11.264 [2024-04-26 09:03:28.338118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.264 [2024-04-26 09:03:28.338139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:11.264 [2024-04-26 09:03:28.356944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90
00:29:11.264 [2024-04-26 09:03:28.357540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.264 [2024-04-26 09:03:28.357560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:11.264 [2024-04-26 09:03:28.376227] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb3dbe0) with pdu=0x2000190fef90
00:29:11.264 [2024-04-26 09:03:28.376828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.264 [2024-04-26 09:03:28.376848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:11.264
00:29:11.264 Latency(us)
00:29:11.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:11.264 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:11.264 nvme0n1 : 2.01 1563.37 195.42 0.00 0.00 10209.53 6396.31 32296.14
00:29:11.264 ===================================================================================================================
00:29:11.264 Total : 1563.37 195.42 0.00 0.00 10209.53 6396.31 32296.14
00:29:11.264 0
00:29:11.264 09:03:28 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:11.264 09:03:28 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:11.264 09:03:28 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:11.264 | .driver_specific
00:29:11.264 | .nvme_error
00:29:11.264 | .status_code
00:29:11.264 | .command_transient_transport_error'
00:29:11.264 09:03:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:11.523 09:03:28 -- host/digest.sh@71 -- # (( 101 > 0 ))
00:29:11.523 09:03:28 -- host/digest.sh@73 -- # killprocess 2226136
00:29:11.523 09:03:28 -- common/autotest_common.sh@936 -- # '[' -z 2226136 ']'
00:29:11.523 09:03:28 -- common/autotest_common.sh@940 -- # kill -0 2226136
00:29:11.523 09:03:28 -- common/autotest_common.sh@941 -- # uname
00:29:11.523 09:03:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:11.523 09:03:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2226136
00:29:11.523 09:03:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:29:11.523 09:03:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:29:11.523 09:03:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2226136'
00:29:11.523 killing process with pid 2226136
00:29:11.523 09:03:28 -- common/autotest_common.sh@955 -- # kill 2226136
00:29:11.523 Received shutdown signal, test time was about 2.000000 seconds
00:29:11.523
00:29:11.523 Latency(us)
00:29:11.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:11.523 ===================================================================================================================
00:29:11.523 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:11.523 09:03:28 -- common/autotest_common.sh@960 -- # wait 2226136
00:29:11.784 09:03:28 -- host/digest.sh@116 -- # killprocess 2223949
00:29:11.784 09:03:28 -- common/autotest_common.sh@936 -- # '[' -z 2223949 ']'
00:29:11.784 09:03:28 -- common/autotest_common.sh@940 -- # kill -0 2223949
00:29:11.784 09:03:28 -- common/autotest_common.sh@941 -- # uname
00:29:11.784 09:03:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:11.784 09:03:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2223949
00:29:11.784 09:03:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:29:11.784 09:03:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:29:11.784 09:03:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2223949'
00:29:11.784 killing process with pid 2223949
00:29:11.784 09:03:28 -- common/autotest_common.sh@955 -- # kill 2223949
00:29:11.784 09:03:28 -- common/autotest_common.sh@960 -- # wait 2223949
00:29:12.043
00:29:12.043 real 0m16.964s
00:29:12.043 user 0m32.612s
00:29:12.043 sys 0m4.398s
00:29:12.043 09:03:29 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:29:12.043 09:03:29 -- common/autotest_common.sh@10 -- # set +x
00:29:12.043 ************************************
00:29:12.043 END TEST nvmf_digest_error
00:29:12.043 ************************************
00:29:12.043 09:03:29 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:29:12.043 09:03:29 -- host/digest.sh@150 -- # nvmftestfini
00:29:12.043 09:03:29 -- nvmf/common.sh@477 -- # nvmfcleanup
00:29:12.043 09:03:29 -- nvmf/common.sh@117 -- # sync
00:29:12.043 09:03:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:12.043 09:03:29 -- nvmf/common.sh@120 -- # set +e
00:29:12.043 09:03:29 -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:12.043 09:03:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:12.043 rmmod nvme_tcp
00:29:12.043 rmmod nvme_fabrics
00:29:12.043 rmmod nvme_keyring
00:29:12.043 09:03:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:12.043 09:03:29 -- nvmf/common.sh@124 -- # set -e
00:29:12.043 09:03:29 -- nvmf/common.sh@125 -- # return 0
00:29:12.043 09:03:29 -- nvmf/common.sh@478 -- # '[' -n 2223949 ']'
00:29:12.043 09:03:29 -- nvmf/common.sh@479 -- # killprocess 2223949
00:29:12.043 09:03:29 -- common/autotest_common.sh@936 -- # '[' -z 2223949 ']'
00:29:12.043 09:03:29 -- common/autotest_common.sh@940 -- # kill -0 2223949
00:29:12.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2223949) - No such process
00:29:12.043 09:03:29 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2223949 is not found'
Process with pid 2223949 is not found
00:29:12.044 09:03:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:29:12.044 09:03:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:29:12.044 09:03:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:29:12.044 09:03:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:12.044 09:03:29 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:29:12.044 09:03:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:12.044 09:03:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:29:12.044 09:03:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:14.578 09:03:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:14.578
00:29:14.578 real 0m43.548s
00:29:14.578 user 1m7.184s
00:29:14.578 sys 0m14.432s
00:29:14.578 09:03:31 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:29:14.578 09:03:31 -- common/autotest_common.sh@10 -- # set +x
00:29:14.578 ************************************
00:29:14.578 END TEST nvmf_digest
00:29:14.578 ************************************
00:29:14.578 09:03:31 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]]
00:29:14.578 09:03:31 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]]
00:29:14.578 09:03:31 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]]
00:29:14.578 09:03:31 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:14.578 09:03:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:29:14.578 09:03:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:14.578 09:03:31 -- common/autotest_common.sh@10 -- # set +x
00:29:14.578 ************************************
00:29:14.578 START TEST nvmf_bdevperf
00:29:14.578 ************************************
00:29:14.578 09:03:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:14.578 * Looking for test storage...
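The START TEST banner above hands control to host/bdevperf.sh via the harness's run_test wrapper; a hedged sketch of the same invocation outside the harness, from the spdk checkout root (run_test only adds timing and xtrace bookkeeping around it):

    test/nvmf/host/bdevperf.sh --transport=tcp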
00:29:14.578 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:14.578 09:03:31 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.578 09:03:31 -- nvmf/common.sh@7 -- # uname -s 00:29:14.578 09:03:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.578 09:03:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.578 09:03:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.578 09:03:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.578 09:03:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.578 09:03:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.578 09:03:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.578 09:03:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.578 09:03:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.578 09:03:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.578 09:03:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:14.578 09:03:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:14.578 09:03:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.578 09:03:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.578 09:03:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:14.578 09:03:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.578 09:03:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.578 09:03:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.578 09:03:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.578 09:03:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.578 09:03:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.578 09:03:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.578 09:03:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.578 09:03:31 -- paths/export.sh@5 -- # export PATH 00:29:14.579 09:03:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.579 09:03:31 -- nvmf/common.sh@47 -- # : 0 00:29:14.579 09:03:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:14.579 09:03:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:14.579 09:03:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.579 09:03:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.579 09:03:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.579 09:03:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:14.579 09:03:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:14.579 09:03:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:14.579 09:03:31 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:14.579 09:03:31 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:14.579 09:03:31 -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:14.579 09:03:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:14.579 09:03:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.579 09:03:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:14.579 09:03:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:14.579 09:03:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:14.579 09:03:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.579 09:03:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:14.579 09:03:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.579 09:03:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:14.579 09:03:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:14.579 09:03:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:14.579 09:03:31 -- common/autotest_common.sh@10 -- # set +x 00:29:21.140 09:03:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:21.140 09:03:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:21.140 09:03:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:21.140 09:03:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:21.140 09:03:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:21.140 09:03:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:21.140 09:03:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:21.140 09:03:37 -- nvmf/common.sh@295 -- # net_devs=() 00:29:21.140 09:03:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:21.140 09:03:37 -- nvmf/common.sh@296 
-- # e810=() 00:29:21.140 09:03:37 -- nvmf/common.sh@296 -- # local -ga e810 00:29:21.140 09:03:37 -- nvmf/common.sh@297 -- # x722=() 00:29:21.140 09:03:37 -- nvmf/common.sh@297 -- # local -ga x722 00:29:21.140 09:03:37 -- nvmf/common.sh@298 -- # mlx=() 00:29:21.140 09:03:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:21.140 09:03:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.140 09:03:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.140 09:03:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.140 09:03:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.140 09:03:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.140 09:03:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.140 09:03:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.140 09:03:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.140 09:03:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.140 09:03:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.140 09:03:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.140 09:03:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:21.140 09:03:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:21.140 09:03:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:21.140 09:03:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:21.140 09:03:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:21.140 09:03:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:21.140 09:03:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:21.140 09:03:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:21.140 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:21.140 09:03:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:21.140 09:03:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:21.140 09:03:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.140 09:03:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.140 09:03:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:21.140 09:03:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:21.140 09:03:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:21.140 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:21.140 09:03:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:21.140 09:03:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:21.140 09:03:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.140 09:03:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.140 09:03:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:21.140 09:03:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:21.140 09:03:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:21.140 09:03:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:21.140 09:03:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:21.140 09:03:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.140 09:03:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:21.140 09:03:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.140 09:03:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:21.140 Found 
net devices under 0000:af:00.0: cvl_0_0
00:29:21.140 09:03:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:29:21.140 09:03:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:29:21.140 09:03:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:21.140 09:03:37 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:29:21.140 09:03:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:21.140 09:03:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
Found net devices under 0000:af:00.1: cvl_0_1
00:29:21.140 09:03:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:29:21.140 09:03:37 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:29:21.140 09:03:37 -- nvmf/common.sh@403 -- # is_hw=yes
00:29:21.140 09:03:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:29:21.140 09:03:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:29:21.140 09:03:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:29:21.140 09:03:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:21.140 09:03:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:21.140 09:03:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:21.140 09:03:37 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:29:21.140 09:03:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:21.140 09:03:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:21.140 09:03:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:29:21.140 09:03:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:21.140 09:03:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:21.140 09:03:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:29:21.140 09:03:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:29:21.140 09:03:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:29:21.140 09:03:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:21.140 09:03:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:21.140 09:03:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:21.140 09:03:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:29:21.140 09:03:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:21.140 09:03:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:21.140 09:03:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:21.140 09:03:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:29:21.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:21.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms
00:29:21.140
00:29:21.140 --- 10.0.0.2 ping statistics ---
00:29:21.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:21.140 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:29:21.140 09:03:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:21.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:21.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms
00:29:21.140
00:29:21.140 --- 10.0.0.1 ping statistics ---
00:29:21.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:21.140 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms
00:29:21.140 09:03:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:21.140 09:03:38 -- nvmf/common.sh@411 -- # return 0
00:29:21.140 09:03:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:29:21.140 09:03:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:21.140 09:03:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:29:21.140 09:03:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:29:21.140 09:03:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:21.140 09:03:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:29:21.140 09:03:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:29:21.140 09:03:38 -- host/bdevperf.sh@25 -- # tgt_init
00:29:21.140 09:03:38 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:21.140 09:03:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:29:21.140 09:03:38 -- common/autotest_common.sh@710 -- # xtrace_disable
00:29:21.140 09:03:38 -- common/autotest_common.sh@10 -- # set +x
00:29:21.140 09:03:38 -- nvmf/common.sh@470 -- # nvmfpid=2230433
00:29:21.140 09:03:38 -- nvmf/common.sh@471 -- # waitforlisten 2230433
00:29:21.140 09:03:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:21.140 09:03:38 -- common/autotest_common.sh@817 -- # '[' -z 2230433 ']'
00:29:21.140 09:03:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:21.140 09:03:38 -- common/autotest_common.sh@822 -- # local max_retries=100
00:29:21.140 09:03:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:21.140 09:03:38 -- common/autotest_common.sh@826 -- # xtrace_disable
00:29:21.140 09:03:38 -- common/autotest_common.sh@10 -- # set +x
00:29:21.140 [2024-04-26 09:03:38.214494] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:29:21.140 [2024-04-26 09:03:38.214548] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:21.140 EAL: No free 2048 kB hugepages reported on node 1
00:29:21.140 [2024-04-26 09:03:38.289792] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:21.140 [2024-04-26 09:03:38.357136] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:21.140 [2024-04-26 09:03:38.357177] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:21.140 [2024-04-26 09:03:38.357186] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:21.140 [2024-04-26 09:03:38.357198] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:21.140 [2024-04-26 09:03:38.357206] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
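Note the CPU layout being established here: nvmf_tgt is launched with -m 0xE, a three-core mask (binary 1110, cores 1-3), so the three reactor threads reported next never share a core with the bdevperf initiator, which later runs with -c 0x1 on core 0. A hedged sketch of the same launch outside the harness, reusing the namespace nvmf_tcp_init created above:

    # Sketch only: binary, flags and network namespace copied from the
    # nvmfappstart trace above; run from the spdk checkout root.
    sudo ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE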
00:29:21.140 [2024-04-26 09:03:38.357461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:21.140 [2024-04-26 09:03:38.357518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:29:21.140 [2024-04-26 09:03:38.357522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:22.075 09:03:39 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:29:22.075 09:03:39 -- common/autotest_common.sh@850 -- # return 0
00:29:22.075 09:03:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:29:22.075 09:03:39 -- common/autotest_common.sh@716 -- # xtrace_disable
00:29:22.075 09:03:39 -- common/autotest_common.sh@10 -- # set +x
00:29:22.075 09:03:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:22.075 09:03:39 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:22.075 09:03:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.075 09:03:39 -- common/autotest_common.sh@10 -- # set +x
00:29:22.075 [2024-04-26 09:03:39.069935] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:22.075 09:03:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.075 09:03:39 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:22.075 09:03:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.075 09:03:39 -- common/autotest_common.sh@10 -- # set +x
00:29:22.075 Malloc0
00:29:22.075 09:03:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.075 09:03:39 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:22.075 09:03:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.075 09:03:39 -- common/autotest_common.sh@10 -- # set +x
00:29:22.075 09:03:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.075 09:03:39 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:22.075 09:03:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.075 09:03:39 -- common/autotest_common.sh@10 -- # set +x
00:29:22.075 09:03:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.075 09:03:39 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:22.075 09:03:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:22.075 09:03:39 -- common/autotest_common.sh@10 -- # set +x
00:29:22.075 [2024-04-26 09:03:39.130765] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:22.075 09:03:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:22.075 09:03:39 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:29:22.075 09:03:39 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:29:22.075 09:03:39 -- nvmf/common.sh@521 -- # config=()
00:29:22.075 09:03:39 -- nvmf/common.sh@521 -- # local subsystem config
00:29:22.075 09:03:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:29:22.075 09:03:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:29:22.075 {
00:29:22.075 "params": {
00:29:22.075 "name": "Nvme$subsystem",
00:29:22.076 "trtype": "$TEST_TRANSPORT",
00:29:22.076 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:22.076 "adrfam": "ipv4",
00:29:22.076 "trsvcid": "$NVMF_PORT",
00:29:22.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:22.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:22.076 "hdgst": ${hdgst:-false},
00:29:22.076 "ddgst": ${ddgst:-false}
00:29:22.076 },
00:29:22.076 "method": "bdev_nvme_attach_controller"
00:29:22.076 }
00:29:22.076 EOF
00:29:22.076 )")
00:29:22.076 09:03:39 -- nvmf/common.sh@543 -- # cat
00:29:22.076 09:03:39 -- nvmf/common.sh@545 -- # jq .
00:29:22.076 09:03:39 -- nvmf/common.sh@546 -- # IFS=,
00:29:22.076 09:03:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:29:22.076 "params": {
00:29:22.076 "name": "Nvme1",
00:29:22.076 "trtype": "tcp",
00:29:22.076 "traddr": "10.0.0.2",
00:29:22.076 "adrfam": "ipv4",
00:29:22.076 "trsvcid": "4420",
00:29:22.076 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:22.076 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:22.076 "hdgst": false,
00:29:22.076 "ddgst": false
00:29:22.076 },
00:29:22.076 "method": "bdev_nvme_attach_controller"
00:29:22.076 }'
00:29:22.076 [2024-04-26 09:03:39.183801] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:29:22.076 [2024-04-26 09:03:39.183847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230673 ]
00:29:22.076 EAL: No free 2048 kB hugepages reported on node 1
00:29:22.076 [2024-04-26 09:03:39.253351] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:22.076 [2024-04-26 09:03:39.320909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:22.645 Running I/O for 1 seconds...
00:29:23.581
00:29:23.581 Latency(us)
00:29:23.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:23.581 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:23.581 Verification LBA range: start 0x0 length 0x4000
00:29:23.581 Nvme1n1 : 1.00 11469.73 44.80 0.00 0.00 11113.19 1664.61 21705.52
00:29:23.581 ===================================================================================================================
00:29:23.581 Total : 11469.73 44.80 0.00 0.00 11113.19 1664.61 21705.52
00:29:23.839 09:03:40 -- host/bdevperf.sh@30 -- # bdevperfpid=2230946
00:29:23.839 09:03:40 -- host/bdevperf.sh@32 -- # sleep 3
00:29:23.839 09:03:40 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:29:23.839 09:03:40 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:29:23.839 09:03:40 -- nvmf/common.sh@521 -- # config=()
00:29:23.839 09:03:40 -- nvmf/common.sh@521 -- # local subsystem config
00:29:23.839 09:03:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:29:23.839 09:03:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:29:23.839 {
00:29:23.839 "params": {
00:29:23.839 "name": "Nvme$subsystem",
00:29:23.839 "trtype": "$TEST_TRANSPORT",
00:29:23.839 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:23.839 "adrfam": "ipv4",
00:29:23.839 "trsvcid": "$NVMF_PORT",
00:29:23.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:23.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:23.839 "hdgst": ${hdgst:-false},
00:29:23.839 "ddgst": ${ddgst:-false}
00:29:23.839 },
00:29:23.839 "method": "bdev_nvme_attach_controller"
00:29:23.839 }
00:29:23.839 EOF
00:29:23.839 )")
00:29:23.839 09:03:40 -- nvmf/common.sh@543 -- # cat
00:29:23.839 09:03:40 -- nvmf/common.sh@545 -- # jq .
00:29:23.839 09:03:40 -- nvmf/common.sh@546 -- # IFS=,
00:29:23.839 09:03:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:29:23.839 "params": {
00:29:23.839 "name": "Nvme1",
00:29:23.839 "trtype": "tcp",
00:29:23.839 "traddr": "10.0.0.2",
00:29:23.839 "adrfam": "ipv4",
00:29:23.839 "trsvcid": "4420",
00:29:23.839 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:23.839 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:23.839 "hdgst": false,
00:29:23.839 "ddgst": false
00:29:23.839 },
00:29:23.839 "method": "bdev_nvme_attach_controller"
00:29:23.839 }'
00:29:23.839 [2024-04-26 09:03:40.897607] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:29:23.839 [2024-04-26 09:03:40.897666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230946 ]
00:29:23.839 EAL: No free 2048 kB hugepages reported on node 1
00:29:23.839 [2024-04-26 09:03:40.968489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:23.839 [2024-04-26 09:03:41.037195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:24.097 Running I/O for 15 seconds...
00:29:26.639 09:03:43 -- host/bdevperf.sh@33 -- # kill -9 2230433
00:29:26.639 09:03:43 -- host/bdevperf.sh@35 -- # sleep 3
00:29:26.639 [2024-04-26 09:03:43.869271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.639 [2024-04-26 09:03:43.869312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.639 [2024-04-26 09:03:43.869335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.639 [2024-04-26 09:03:43.869347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.639 [2024-04-26 09:03:43.869360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.639 [2024-04-26 09:03:43.869375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.639 [2024-04-26 09:03:43.869387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.639 [2024-04-26 09:03:43.869397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.639 [2024-04-26 09:03:43.869409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.639 [2024-04-26 09:03:43.869418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.639 [2024-04-26 09:03:43.869429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.639 [2024-04-26 09:03:43.869438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.639 [2024-04-26 09:03:43.869582]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.639 [2024-04-26 09:03:43.869594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.639 [... ~120 further queued commands aborted identically between 09:03:43.869606 and 09:03:43.872005: READ sqid:1 lba 101168-101608 and WRITE sqid:1 lba 101624-102128, len:8 each, every completion ABORTED - SQ DELETION (00/08) ...] 00:29:26.642 [2024-04-26 09:03:43.872015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x2064b00 is same with the state(5) to be set 00:29:26.642 [2024-04-26 09:03:43.872027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:26.642 [2024-04-26 09:03:43.872035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:26.642 [2024-04-26 09:03:43.872045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101616 len:8 PRP1 0x0 PRP2 0x0 00:29:26.642 [2024-04-26 09:03:43.872054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.642 [2024-04-26 09:03:43.872102] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2064b00 was disconnected and freed. reset controller. 00:29:26.642 [2024-04-26 09:03:43.874771] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.642 [2024-04-26 09:03:43.874825] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:26.642 [2024-04-26 09:03:43.875609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.642 [2024-04-26 09:03:43.876043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.642 [2024-04-26 09:03:43.876055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:26.642 [2024-04-26 09:03:43.876065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:26.642 [2024-04-26 09:03:43.876237] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:26.642 [2024-04-26 09:03:43.876406] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.643 [2024-04-26 09:03:43.876416] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.643 [2024-04-26 09:03:43.876426] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.643 [2024-04-26 09:03:43.879087] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
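From here the log settles into a fixed loop: the target process was killed with kill -9 earlier in this excerpt, so nothing is listening on 10.0.0.2:4420 and every connect() issued by posix_sock_create returns errno = 111 (ECONNREFUSED); bdev_nvme then marks the reset failed and schedules another attempt. A hypothetical shell probe (not part of the test suite) that observes the same condition through bash's /dev/tcp:

```bash
#!/usr/bin/env bash
# Hypothetical probe, not from bdevperf.sh: check whether anything is
# listening where bdev_nvme keeps reconnecting. While the target is dead
# this fails just like the log's connect() calls (errno 111, ECONNREFUSED).
addr=10.0.0.2   # target address from the log
port=4420       # NVMe-oF TCP port from the log
if timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
  echo "listener present on $addr:$port, a reconnect would succeed"
else
  echo "connection refused or timed out on $addr:$port, matching errno = 111 above"
fi
```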
00:29:26.903 [2024-04-26 09:03:43.887941] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.903 [2024-04-26 09:03:43.888655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.903 [2024-04-26 09:03:43.889162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.903 [2024-04-26 09:03:43.889175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:26.903 [2024-04-26 09:03:43.889184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:26.903 [2024-04-26 09:03:43.889350] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:26.903 [2024-04-26 09:03:43.889519] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.903 [2024-04-26 09:03:43.889530] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.903 [2024-04-26 09:03:43.889539] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.903 [2024-04-26 09:03:43.892176] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.903 [2024-04-26 09:03:43.900720] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.903 [2024-04-26 09:03:43.901426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.903 [2024-04-26 09:03:43.902010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.903 [2024-04-26 09:03:43.902052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:26.903 [2024-04-26 09:03:43.902085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:26.903 [2024-04-26 09:03:43.902523] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:26.903 [2024-04-26 09:03:43.902765] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.903 [2024-04-26 09:03:43.902780] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.903 [2024-04-26 09:03:43.902792] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.903 [2024-04-26 09:03:43.906522] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.903 [2024-04-26 09:03:43.914376] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.903 [2024-04-26 09:03:43.915072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.903 [2024-04-26 09:03:43.915582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.903 [2024-04-26 09:03:43.915625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:26.903 [2024-04-26 09:03:43.915659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:26.903 [2024-04-26 09:03:43.916057] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:26.903 [2024-04-26 09:03:43.916222] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.903 [2024-04-26 09:03:43.916233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.903 [2024-04-26 09:03:43.916242] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.903 [2024-04-26 09:03:43.918831] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.903 [2024-04-26 09:03:43.927151] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.903 [2024-04-26 09:03:43.927864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.903 [2024-04-26 09:03:43.928470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.903 [2024-04-26 09:03:43.928512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:26.903 [2024-04-26 09:03:43.928544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:26.903 [2024-04-26 09:03:43.929029] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:26.903 [2024-04-26 09:03:43.929195] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.903 [2024-04-26 09:03:43.929205] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.903 [2024-04-26 09:03:43.929214] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.903 [2024-04-26 09:03:43.931744] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.903 [2024-04-26 09:03:43.939870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.903 [2024-04-26 09:03:43.940530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.903 [2024-04-26 09:03:43.941010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.903 [2024-04-26 09:03:43.941022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:26.903 [2024-04-26 09:03:43.941032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:26.903 [2024-04-26 09:03:43.941196] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:26.903 [2024-04-26 09:03:43.941360] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.903 [2024-04-26 09:03:43.941370] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.903 [2024-04-26 09:03:43.941382] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.903 [2024-04-26 09:03:43.943913] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.903 [2024-04-26 09:03:43.952711] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.903 [2024-04-26 09:03:43.953393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.903 [2024-04-26 09:03:43.953978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.903 [2024-04-26 09:03:43.954020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:26.903 [2024-04-26 09:03:43.954053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:26.903 [2024-04-26 09:03:43.954653] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:26.903 [2024-04-26 09:03:43.955244] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.903 [2024-04-26 09:03:43.955278] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.903 [2024-04-26 09:03:43.955309] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.903 [2024-04-26 09:03:43.957980] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.904 [2024-04-26 09:03:43.965359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.904 [2024-04-26 09:03:43.966040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.904 [2024-04-26 09:03:43.966630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.904 [2024-04-26 09:03:43.966672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:26.904 [2024-04-26 09:03:43.966704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:26.904 [2024-04-26 09:03:43.967066] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:26.904 [2024-04-26 09:03:43.967232] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.904 [2024-04-26 09:03:43.967242] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.904 [2024-04-26 09:03:43.967250] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.904 [2024-04-26 09:03:43.969771] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.904 [2024-04-26 09:03:43.978061] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.904 [2024-04-26 09:03:43.978755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.904 [2024-04-26 09:03:43.979338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.904 [2024-04-26 09:03:43.979378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:26.904 [2024-04-26 09:03:43.979411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:26.904 [2024-04-26 09:03:43.980016] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:26.904 [2024-04-26 09:03:43.980318] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.904 [2024-04-26 09:03:43.980328] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.904 [2024-04-26 09:03:43.980340] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.904 [2024-04-26 09:03:43.982904] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.904 [2024-04-26 09:03:43.990796] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.904 [2024-04-26 09:03:43.991469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.904 [2024-04-26 09:03:43.992040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.904 [2024-04-26 09:03:43.992081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:26.904 [2024-04-26 09:03:43.992113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:26.904 [2024-04-26 09:03:43.992702] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:26.904 [2024-04-26 09:03:43.992939] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.904 [2024-04-26 09:03:43.992954] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.904 [2024-04-26 09:03:43.992966] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.904 [2024-04-26 09:03:43.996686] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.904 [2024-04-26 09:03:44.004429] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.904 [2024-04-26 09:03:44.005096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.904 [2024-04-26 09:03:44.005595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.904 [2024-04-26 09:03:44.005638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:26.904 [2024-04-26 09:03:44.005671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:26.904 [2024-04-26 09:03:44.006248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:26.904 [2024-04-26 09:03:44.006414] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.904 [2024-04-26 09:03:44.006424] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.904 [2024-04-26 09:03:44.006433] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.904 [2024-04-26 09:03:44.008963] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.904 [2024-04-26 09:03:44.017140] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.904 [2024-04-26 09:03:44.017813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.904 [2024-04-26 09:03:44.018396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.904 [2024-04-26 09:03:44.018436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:26.904 [2024-04-26 09:03:44.018483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:26.904 [2024-04-26 09:03:44.019075] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:26.904 [2024-04-26 09:03:44.019241] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.904 [2024-04-26 09:03:44.019251] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.904 [2024-04-26 09:03:44.019260] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.904 [2024-04-26 09:03:44.021789] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.904 [2024-04-26 09:03:44.029853] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.904 [2024-04-26 09:03:44.030550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.904 [2024-04-26 09:03:44.031090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.904 [2024-04-26 09:03:44.031130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:26.904 [2024-04-26 09:03:44.031163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:26.904 [2024-04-26 09:03:44.031570] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:26.904 [2024-04-26 09:03:44.031735] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.904 [2024-04-26 09:03:44.031746] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.904 [2024-04-26 09:03:44.031754] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.904 [2024-04-26 09:03:44.034268] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.904 [2024-04-26 09:03:44.042591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.904 [2024-04-26 09:03:44.043279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.904 [2024-04-26 09:03:44.043838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.904 [2024-04-26 09:03:44.043879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:26.904 [2024-04-26 09:03:44.043911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:26.904 [2024-04-26 09:03:44.044355] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:26.904 [2024-04-26 09:03:44.044523] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.904 [2024-04-26 09:03:44.044532] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.904 [2024-04-26 09:03:44.044541] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.904 [2024-04-26 09:03:44.047124] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.904 [2024-04-26 09:03:44.055563] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.904 [2024-04-26 09:03:44.056160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.904 [2024-04-26 09:03:44.056649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.904 [2024-04-26 09:03:44.056662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:26.904 [2024-04-26 09:03:44.056671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:26.904 [2024-04-26 09:03:44.056841] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:26.904 [2024-04-26 09:03:44.057012] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.904 [2024-04-26 09:03:44.057022] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.904 [2024-04-26 09:03:44.057031] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.904 [2024-04-26 09:03:44.059694] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.904 [2024-04-26 09:03:44.068610] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.904 [2024-04-26 09:03:44.069322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.904 [2024-04-26 09:03:44.069838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.904 [2024-04-26 09:03:44.069880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:26.904 [2024-04-26 09:03:44.069912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:26.904 [2024-04-26 09:03:44.070429] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:26.904 [2024-04-26 09:03:44.070604] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.904 [2024-04-26 09:03:44.070615] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.904 [2024-04-26 09:03:44.070624] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.904 [2024-04-26 09:03:44.073319] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.904 [2024-04-26 09:03:44.081433] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.904 [2024-04-26 09:03:44.082138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.904 [2024-04-26 09:03:44.082725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.904 [2024-04-26 09:03:44.082767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:26.905 [2024-04-26 09:03:44.082799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:26.905 [2024-04-26 09:03:44.083388] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:26.905 [2024-04-26 09:03:44.083558] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.905 [2024-04-26 09:03:44.083568] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.905 [2024-04-26 09:03:44.083577] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.905 [2024-04-26 09:03:44.086099] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.905 [2024-04-26 09:03:44.094212] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.905 [2024-04-26 09:03:44.094830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.905 [2024-04-26 09:03:44.095359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.905 [2024-04-26 09:03:44.095400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:26.905 [2024-04-26 09:03:44.095433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:26.905 [2024-04-26 09:03:44.095682] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:26.905 [2024-04-26 09:03:44.095847] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.905 [2024-04-26 09:03:44.095858] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.905 [2024-04-26 09:03:44.095866] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.905 [2024-04-26 09:03:44.098388] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.905 [2024-04-26 09:03:44.106992] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.905 [2024-04-26 09:03:44.107673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.905 [2024-04-26 09:03:44.108151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.905 [2024-04-26 09:03:44.108163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:26.905 [2024-04-26 09:03:44.108173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:26.905 [2024-04-26 09:03:44.108338] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:26.905 [2024-04-26 09:03:44.108508] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.905 [2024-04-26 09:03:44.108519] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.905 [2024-04-26 09:03:44.108528] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.905 [2024-04-26 09:03:44.111045] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.905 [2024-04-26 09:03:44.119669] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.905 [2024-04-26 09:03:44.120388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.905 [2024-04-26 09:03:44.120943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.905 [2024-04-26 09:03:44.120984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:26.905 [2024-04-26 09:03:44.121016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:26.905 [2024-04-26 09:03:44.121404] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:26.905 [2024-04-26 09:03:44.121580] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.905 [2024-04-26 09:03:44.121591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.905 [2024-04-26 09:03:44.121599] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.905 [2024-04-26 09:03:44.124239] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.905 [2024-04-26 09:03:44.132353] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.905 [2024-04-26 09:03:44.133050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.905 [2024-04-26 09:03:44.133572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.905 [2024-04-26 09:03:44.133585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:26.905 [2024-04-26 09:03:44.133594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:26.905 [2024-04-26 09:03:44.133759] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:26.905 [2024-04-26 09:03:44.133924] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.905 [2024-04-26 09:03:44.133934] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.905 [2024-04-26 09:03:44.133943] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.905 [2024-04-26 09:03:44.136466] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.905 [2024-04-26 09:03:44.145199] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.905 [2024-04-26 09:03:44.145850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.905 [2024-04-26 09:03:44.146366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.905 [2024-04-26 09:03:44.146378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:26.905 [2024-04-26 09:03:44.146390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:26.905 [2024-04-26 09:03:44.146566] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:26.905 [2024-04-26 09:03:44.146736] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.905 [2024-04-26 09:03:44.146746] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.905 [2024-04-26 09:03:44.146755] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.166 [2024-04-26 09:03:44.149436] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.166 [2024-04-26 09:03:44.158013] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.166 [2024-04-26 09:03:44.158690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.159200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.159240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.166 [2024-04-26 09:03:44.159272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.166 [2024-04-26 09:03:44.159663] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.166 [2024-04-26 09:03:44.159833] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.166 [2024-04-26 09:03:44.159844] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.166 [2024-04-26 09:03:44.159853] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.166 [2024-04-26 09:03:44.162399] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.166 [2024-04-26 09:03:44.170722] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.166 [2024-04-26 09:03:44.171423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.171969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.172010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.166 [2024-04-26 09:03:44.172042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.166 [2024-04-26 09:03:44.172644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.166 [2024-04-26 09:03:44.172896] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.166 [2024-04-26 09:03:44.172906] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.166 [2024-04-26 09:03:44.172915] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.166 [2024-04-26 09:03:44.175438] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.166 [2024-04-26 09:03:44.183362] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.166 [2024-04-26 09:03:44.184048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.184639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.184681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.166 [2024-04-26 09:03:44.184721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.166 [2024-04-26 09:03:44.185086] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.166 [2024-04-26 09:03:44.185243] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.166 [2024-04-26 09:03:44.185252] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.166 [2024-04-26 09:03:44.185261] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.166 [2024-04-26 09:03:44.188569] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.166 [2024-04-26 09:03:44.196725] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.166 [2024-04-26 09:03:44.197396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.197995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.198037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.166 [2024-04-26 09:03:44.198069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.166 [2024-04-26 09:03:44.198550] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.166 [2024-04-26 09:03:44.198716] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.166 [2024-04-26 09:03:44.198727] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.166 [2024-04-26 09:03:44.198735] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.166 [2024-04-26 09:03:44.201248] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.166 [2024-04-26 09:03:44.209373] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.166 [2024-04-26 09:03:44.210069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.210643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.210656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.166 [2024-04-26 09:03:44.210665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.166 [2024-04-26 09:03:44.210829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.166 [2024-04-26 09:03:44.210994] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.166 [2024-04-26 09:03:44.211004] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.166 [2024-04-26 09:03:44.211013] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.166 [2024-04-26 09:03:44.213537] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.166 [2024-04-26 09:03:44.222172] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.166 [2024-04-26 09:03:44.222795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.223299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.223339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.166 [2024-04-26 09:03:44.223371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.166 [2024-04-26 09:03:44.223873] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.166 [2024-04-26 09:03:44.224040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.166 [2024-04-26 09:03:44.224051] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.166 [2024-04-26 09:03:44.224059] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.166 [2024-04-26 09:03:44.226636] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.166 [2024-04-26 09:03:44.234904] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.166 [2024-04-26 09:03:44.235589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.236079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.236119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.166 [2024-04-26 09:03:44.236153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.166 [2024-04-26 09:03:44.236614] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.166 [2024-04-26 09:03:44.236772] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.166 [2024-04-26 09:03:44.236783] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.166 [2024-04-26 09:03:44.236792] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.166 [2024-04-26 09:03:44.239296] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.166 [2024-04-26 09:03:44.247708] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.166 [2024-04-26 09:03:44.248444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.249072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.166 [2024-04-26 09:03:44.249113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.166 [2024-04-26 09:03:44.249145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.166 [2024-04-26 09:03:44.249614] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.166 [2024-04-26 09:03:44.249773] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.166 [2024-04-26 09:03:44.249783] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.166 [2024-04-26 09:03:44.249792] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.166 [2024-04-26 09:03:44.252358] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.166 [2024-04-26 09:03:44.260488] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.166 [2024-04-26 09:03:44.261201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.261703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.261718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.167 [2024-04-26 09:03:44.261727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.167 [2024-04-26 09:03:44.261885] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.167 [2024-04-26 09:03:44.262047] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.167 [2024-04-26 09:03:44.262058] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.167 [2024-04-26 09:03:44.262067] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.167 [2024-04-26 09:03:44.264592] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.167 [2024-04-26 09:03:44.273303] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.167 [2024-04-26 09:03:44.273997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.274582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.274625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.167 [2024-04-26 09:03:44.274660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.167 [2024-04-26 09:03:44.275021] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.167 [2024-04-26 09:03:44.275179] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.167 [2024-04-26 09:03:44.275190] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.167 [2024-04-26 09:03:44.275199] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.167 [2024-04-26 09:03:44.277739] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.167 [2024-04-26 09:03:44.286057] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.167 [2024-04-26 09:03:44.286744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.287312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.287352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.167 [2024-04-26 09:03:44.287385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.167 [2024-04-26 09:03:44.287984] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.167 [2024-04-26 09:03:44.288303] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.167 [2024-04-26 09:03:44.288314] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.167 [2024-04-26 09:03:44.288323] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.167 [2024-04-26 09:03:44.290793] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.167 [2024-04-26 09:03:44.298793] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.167 [2024-04-26 09:03:44.299500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.300055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.300096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.167 [2024-04-26 09:03:44.300129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.167 [2024-04-26 09:03:44.300728] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.167 [2024-04-26 09:03:44.301160] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.167 [2024-04-26 09:03:44.301174] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.167 [2024-04-26 09:03:44.301183] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.167 [2024-04-26 09:03:44.303658] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.167 [2024-04-26 09:03:44.311516] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.167 [2024-04-26 09:03:44.312219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.312805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.312847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.167 [2024-04-26 09:03:44.312882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.167 [2024-04-26 09:03:44.313122] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.167 [2024-04-26 09:03:44.313280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.167 [2024-04-26 09:03:44.313291] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.167 [2024-04-26 09:03:44.313300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.167 [2024-04-26 09:03:44.315791] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.167 [2024-04-26 09:03:44.324428] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.167 [2024-04-26 09:03:44.325132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.325558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.325572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.167 [2024-04-26 09:03:44.325583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.167 [2024-04-26 09:03:44.325753] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.167 [2024-04-26 09:03:44.325923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.167 [2024-04-26 09:03:44.325935] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.167 [2024-04-26 09:03:44.325945] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.167 [2024-04-26 09:03:44.328607] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.167 [2024-04-26 09:03:44.337401] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.167 [2024-04-26 09:03:44.337974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.338484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.338499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.167 [2024-04-26 09:03:44.338508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.167 [2024-04-26 09:03:44.338684] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.167 [2024-04-26 09:03:44.338849] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.167 [2024-04-26 09:03:44.338860] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.167 [2024-04-26 09:03:44.338872] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.167 [2024-04-26 09:03:44.341536] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.167 [2024-04-26 09:03:44.350224] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.167 [2024-04-26 09:03:44.350891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.351394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.351406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.167 [2024-04-26 09:03:44.351416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.167 [2024-04-26 09:03:44.351586] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.167 [2024-04-26 09:03:44.351751] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.167 [2024-04-26 09:03:44.351762] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.167 [2024-04-26 09:03:44.351772] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.167 [2024-04-26 09:03:44.354350] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.167 [2024-04-26 09:03:44.363036] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.167 [2024-04-26 09:03:44.363727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.364214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.364227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.167 [2024-04-26 09:03:44.364236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.167 [2024-04-26 09:03:44.364401] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.167 [2024-04-26 09:03:44.364570] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.167 [2024-04-26 09:03:44.364581] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.167 [2024-04-26 09:03:44.364590] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.167 [2024-04-26 09:03:44.367172] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.167 [2024-04-26 09:03:44.375857] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.167 [2024-04-26 09:03:44.376572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.377158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.167 [2024-04-26 09:03:44.377199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.167 [2024-04-26 09:03:44.377231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.167 [2024-04-26 09:03:44.377830] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.168 [2024-04-26 09:03:44.378388] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.168 [2024-04-26 09:03:44.378403] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.168 [2024-04-26 09:03:44.378416] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.168 [2024-04-26 09:03:44.382140] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.168 [2024-04-26 09:03:44.389169] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.168 [2024-04-26 09:03:44.389854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.168 [2024-04-26 09:03:44.390329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.168 [2024-04-26 09:03:44.390370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.168 [2024-04-26 09:03:44.390404] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.168 [2024-04-26 09:03:44.391005] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.168 [2024-04-26 09:03:44.391348] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.168 [2024-04-26 09:03:44.391359] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.168 [2024-04-26 09:03:44.391369] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.168 [2024-04-26 09:03:44.394033] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.168 [2024-04-26 09:03:44.402069] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.168 [2024-04-26 09:03:44.402784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.168 [2024-04-26 09:03:44.403356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.168 [2024-04-26 09:03:44.403396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.168 [2024-04-26 09:03:44.403429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.168 [2024-04-26 09:03:44.404033] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.168 [2024-04-26 09:03:44.404420] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.168 [2024-04-26 09:03:44.404431] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.168 [2024-04-26 09:03:44.404440] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.168 [2024-04-26 09:03:44.407068] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.428 [2024-04-26 09:03:44.414939] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.428 [2024-04-26 09:03:44.415602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.428 [2024-04-26 09:03:44.416163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.428 [2024-04-26 09:03:44.416204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.428 [2024-04-26 09:03:44.416237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.428 [2024-04-26 09:03:44.416474] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.428 [2024-04-26 09:03:44.416642] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.428 [2024-04-26 09:03:44.416654] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.428 [2024-04-26 09:03:44.416662] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.428 [2024-04-26 09:03:44.419315] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.428 [2024-04-26 09:03:44.427649] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.428 [2024-04-26 09:03:44.428309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.428 [2024-04-26 09:03:44.428829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.428 [2024-04-26 09:03:44.428844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.428 [2024-04-26 09:03:44.428854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.428 [2024-04-26 09:03:44.429011] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.428 [2024-04-26 09:03:44.429168] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.428 [2024-04-26 09:03:44.429179] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.428 [2024-04-26 09:03:44.429188] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.428 [2024-04-26 09:03:44.431646] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.428 [2024-04-26 09:03:44.440329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.428 [2024-04-26 09:03:44.441027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.428 [2024-04-26 09:03:44.441594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.428 [2024-04-26 09:03:44.441608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.428 [2024-04-26 09:03:44.441618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.428 [2024-04-26 09:03:44.441776] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.428 [2024-04-26 09:03:44.441932] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.428 [2024-04-26 09:03:44.441944] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.428 [2024-04-26 09:03:44.441952] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.428 [2024-04-26 09:03:44.444400] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.428 [2024-04-26 09:03:44.453071] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.428 [2024-04-26 09:03:44.453675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.428 [2024-04-26 09:03:44.454163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.428 [2024-04-26 09:03:44.454205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.428 [2024-04-26 09:03:44.454238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.428 [2024-04-26 09:03:44.454836] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.428 [2024-04-26 09:03:44.455428] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.428 [2024-04-26 09:03:44.455469] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.428 [2024-04-26 09:03:44.455477] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.428 [2024-04-26 09:03:44.457946] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.428 [2024-04-26 09:03:44.465920] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.428 [2024-04-26 09:03:44.466635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.428 [2024-04-26 09:03:44.467171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.428 [2024-04-26 09:03:44.467211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.428 [2024-04-26 09:03:44.467245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.428 [2024-04-26 09:03:44.467496] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.428 [2024-04-26 09:03:44.467734] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.428 [2024-04-26 09:03:44.467750] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.428 [2024-04-26 09:03:44.467763] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.428 [2024-04-26 09:03:44.471495] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.428 [2024-04-26 09:03:44.479281] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.428 [2024-04-26 09:03:44.479909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.480518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.480561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.429 [2024-04-26 09:03:44.480594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.429 [2024-04-26 09:03:44.481182] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.429 [2024-04-26 09:03:44.481492] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.429 [2024-04-26 09:03:44.481504] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.429 [2024-04-26 09:03:44.481514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.429 [2024-04-26 09:03:44.484031] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.429 [2024-04-26 09:03:44.491974] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.429 [2024-04-26 09:03:44.492583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.493083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.493096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.429 [2024-04-26 09:03:44.493106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.429 [2024-04-26 09:03:44.493276] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.429 [2024-04-26 09:03:44.493447] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.429 [2024-04-26 09:03:44.493463] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.429 [2024-04-26 09:03:44.493473] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.429 [2024-04-26 09:03:44.496187] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.429 [2024-04-26 09:03:44.504880] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.429 [2024-04-26 09:03:44.505548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.506090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.506139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.429 [2024-04-26 09:03:44.506173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.429 [2024-04-26 09:03:44.506576] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.429 [2024-04-26 09:03:44.506736] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.429 [2024-04-26 09:03:44.506747] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.429 [2024-04-26 09:03:44.506756] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.429 [2024-04-26 09:03:44.509203] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.429 [2024-04-26 09:03:44.517608] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.429 [2024-04-26 09:03:44.518269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.518837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.518879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.429 [2024-04-26 09:03:44.518913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.429 [2024-04-26 09:03:44.519332] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.429 [2024-04-26 09:03:44.519496] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.429 [2024-04-26 09:03:44.519508] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.429 [2024-04-26 09:03:44.519516] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.429 [2024-04-26 09:03:44.521967] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.429 [2024-04-26 09:03:44.530353] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.429 [2024-04-26 09:03:44.530971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.531493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.531534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.429 [2024-04-26 09:03:44.531567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.429 [2024-04-26 09:03:44.532154] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.429 [2024-04-26 09:03:44.532597] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.429 [2024-04-26 09:03:44.532609] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.429 [2024-04-26 09:03:44.532617] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.429 [2024-04-26 09:03:44.535068] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.429 [2024-04-26 09:03:44.543024] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.429 [2024-04-26 09:03:44.543688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.544133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.544174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.429 [2024-04-26 09:03:44.544214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.429 [2024-04-26 09:03:44.544629] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.429 [2024-04-26 09:03:44.544788] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.429 [2024-04-26 09:03:44.544799] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.429 [2024-04-26 09:03:44.544807] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.429 [2024-04-26 09:03:44.547256] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.429 [2024-04-26 09:03:44.555712] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.429 [2024-04-26 09:03:44.556333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.556932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.556974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.429 [2024-04-26 09:03:44.557007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.429 [2024-04-26 09:03:44.557443] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.429 [2024-04-26 09:03:44.557606] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.429 [2024-04-26 09:03:44.557618] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.429 [2024-04-26 09:03:44.557626] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.429 [2024-04-26 09:03:44.560073] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.429 [2024-04-26 09:03:44.568465] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.429 [2024-04-26 09:03:44.569127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.569690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.569734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.429 [2024-04-26 09:03:44.569767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.429 [2024-04-26 09:03:44.570354] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.429 [2024-04-26 09:03:44.570788] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.429 [2024-04-26 09:03:44.570799] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.429 [2024-04-26 09:03:44.570808] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.429 [2024-04-26 09:03:44.573252] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.429 [2024-04-26 09:03:44.581202] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.429 [2024-04-26 09:03:44.581820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.582308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.582349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.429 [2024-04-26 09:03:44.582383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.429 [2024-04-26 09:03:44.582766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.429 [2024-04-26 09:03:44.582924] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.429 [2024-04-26 09:03:44.582936] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.429 [2024-04-26 09:03:44.582945] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.429 [2024-04-26 09:03:44.585391] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.429 [2024-04-26 09:03:44.593951] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.429 [2024-04-26 09:03:44.594569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.595059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.429 [2024-04-26 09:03:44.595099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.429 [2024-04-26 09:03:44.595132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.429 [2024-04-26 09:03:44.595549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.430 [2024-04-26 09:03:44.595708] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.430 [2024-04-26 09:03:44.595720] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.430 [2024-04-26 09:03:44.595728] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.430 [2024-04-26 09:03:44.598179] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.430 [2024-04-26 09:03:44.606766] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.430 [2024-04-26 09:03:44.607405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.430 [2024-04-26 09:03:44.607891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.430 [2024-04-26 09:03:44.607905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.430 [2024-04-26 09:03:44.607915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.430 [2024-04-26 09:03:44.608072] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.430 [2024-04-26 09:03:44.608229] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.430 [2024-04-26 09:03:44.608240] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.430 [2024-04-26 09:03:44.608248] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.430 [2024-04-26 09:03:44.610712] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.430 [2024-04-26 09:03:44.619521] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.430 [2024-04-26 09:03:44.620142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.430 [2024-04-26 09:03:44.620643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.430 [2024-04-26 09:03:44.620658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.430 [2024-04-26 09:03:44.620668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.430 [2024-04-26 09:03:44.620829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.430 [2024-04-26 09:03:44.620990] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.430 [2024-04-26 09:03:44.621002] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.430 [2024-04-26 09:03:44.621011] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.430 [2024-04-26 09:03:44.623460] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.430 [2024-04-26 09:03:44.632256] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.430 [2024-04-26 09:03:44.632900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.430 [2024-04-26 09:03:44.633391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.430 [2024-04-26 09:03:44.633433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.430 [2024-04-26 09:03:44.633480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.430 [2024-04-26 09:03:44.633907] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.430 [2024-04-26 09:03:44.634074] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.430 [2024-04-26 09:03:44.634086] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.430 [2024-04-26 09:03:44.634095] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.430 [2024-04-26 09:03:44.636779] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.430 [2024-04-26 09:03:44.645067] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.430 [2024-04-26 09:03:44.645724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.430 [2024-04-26 09:03:44.646202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.430 [2024-04-26 09:03:44.646243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.430 [2024-04-26 09:03:44.646277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.430 [2024-04-26 09:03:44.646720] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.430 [2024-04-26 09:03:44.646878] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.430 [2024-04-26 09:03:44.646889] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.430 [2024-04-26 09:03:44.646897] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.430 [2024-04-26 09:03:44.649343] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.430 [2024-04-26 09:03:44.657703] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.430 [2024-04-26 09:03:44.658376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.430 [2024-04-26 09:03:44.658891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.430 [2024-04-26 09:03:44.658934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.430 [2024-04-26 09:03:44.658967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.430 [2024-04-26 09:03:44.659417] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.430 [2024-04-26 09:03:44.659664] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.430 [2024-04-26 09:03:44.659684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.430 [2024-04-26 09:03:44.659697] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.430 [2024-04-26 09:03:44.663411] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.430 [2024-04-26 09:03:44.671088] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.430 [2024-04-26 09:03:44.671779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.430 [2024-04-26 09:03:44.672286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.430 [2024-04-26 09:03:44.672299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.430 [2024-04-26 09:03:44.672309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.430 [2024-04-26 09:03:44.672485] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.430 [2024-04-26 09:03:44.672656] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.430 [2024-04-26 09:03:44.672668] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.430 [2024-04-26 09:03:44.672677] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.690 [2024-04-26 09:03:44.675335] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.690 [2024-04-26 09:03:44.683957] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.690 [2024-04-26 09:03:44.684667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.685245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.685286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.690 [2024-04-26 09:03:44.685319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.690 [2024-04-26 09:03:44.685710] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.690 [2024-04-26 09:03:44.685883] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.690 [2024-04-26 09:03:44.685894] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.690 [2024-04-26 09:03:44.685903] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.690 [2024-04-26 09:03:44.688564] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.690 [2024-04-26 09:03:44.696874] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.690 [2024-04-26 09:03:44.697591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.697998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.698012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.690 [2024-04-26 09:03:44.698021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.690 [2024-04-26 09:03:44.698187] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.690 [2024-04-26 09:03:44.698351] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.690 [2024-04-26 09:03:44.698363] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.690 [2024-04-26 09:03:44.698378] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.690 [2024-04-26 09:03:44.700964] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.690 [2024-04-26 09:03:44.709673] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.690 [2024-04-26 09:03:44.710291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.710880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.710923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.690 [2024-04-26 09:03:44.710957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.690 [2024-04-26 09:03:44.711554] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.690 [2024-04-26 09:03:44.712008] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.690 [2024-04-26 09:03:44.712020] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.690 [2024-04-26 09:03:44.712029] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.690 [2024-04-26 09:03:44.714620] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.690 [2024-04-26 09:03:44.722389] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.690 [2024-04-26 09:03:44.722977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.723412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.723465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.690 [2024-04-26 09:03:44.723499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.690 [2024-04-26 09:03:44.723871] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.690 [2024-04-26 09:03:44.724029] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.690 [2024-04-26 09:03:44.724040] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.690 [2024-04-26 09:03:44.724048] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.690 [2024-04-26 09:03:44.726505] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.690 [2024-04-26 09:03:44.735023] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.690 [2024-04-26 09:03:44.735715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.736288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.736329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.690 [2024-04-26 09:03:44.736362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.690 [2024-04-26 09:03:44.736849] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.690 [2024-04-26 09:03:44.737007] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.690 [2024-04-26 09:03:44.737018] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.690 [2024-04-26 09:03:44.737027] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.690 [2024-04-26 09:03:44.739480] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.690 [2024-04-26 09:03:44.747686] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.690 [2024-04-26 09:03:44.748310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.748799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.748813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.690 [2024-04-26 09:03:44.748822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.690 [2024-04-26 09:03:44.748981] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.690 [2024-04-26 09:03:44.749137] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.690 [2024-04-26 09:03:44.749148] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.690 [2024-04-26 09:03:44.749157] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.690 [2024-04-26 09:03:44.751602] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.690 [2024-04-26 09:03:44.760384] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.690 [2024-04-26 09:03:44.761074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.761676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.761718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.690 [2024-04-26 09:03:44.761751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.690 [2024-04-26 09:03:44.762212] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.690 [2024-04-26 09:03:44.762370] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.690 [2024-04-26 09:03:44.762381] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.690 [2024-04-26 09:03:44.762390] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.690 [2024-04-26 09:03:44.764843] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.690 [2024-04-26 09:03:44.773072] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.690 [2024-04-26 09:03:44.773734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.774319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.774360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.690 [2024-04-26 09:03:44.774392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.690 [2024-04-26 09:03:44.774993] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.690 [2024-04-26 09:03:44.775252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.690 [2024-04-26 09:03:44.775263] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.690 [2024-04-26 09:03:44.775271] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.690 [2024-04-26 09:03:44.777725] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.690 [2024-04-26 09:03:44.785814] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.690 [2024-04-26 09:03:44.786434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.786881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.690 [2024-04-26 09:03:44.786922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.690 [2024-04-26 09:03:44.786959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.690 [2024-04-26 09:03:44.787116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.690 [2024-04-26 09:03:44.787274] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.691 [2024-04-26 09:03:44.787285] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.691 [2024-04-26 09:03:44.787294] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.691 [2024-04-26 09:03:44.789746] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.691 [2024-04-26 09:03:44.798556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.691 [2024-04-26 09:03:44.799238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.799747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.799789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.691 [2024-04-26 09:03:44.799822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.691 [2024-04-26 09:03:44.800227] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.691 [2024-04-26 09:03:44.800386] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.691 [2024-04-26 09:03:44.800397] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.691 [2024-04-26 09:03:44.800406] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.691 [2024-04-26 09:03:44.802859] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.691 [2024-04-26 09:03:44.811332] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.691 [2024-04-26 09:03:44.811976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.812574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.812615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.691 [2024-04-26 09:03:44.812648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.691 [2024-04-26 09:03:44.813058] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.691 [2024-04-26 09:03:44.813216] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.691 [2024-04-26 09:03:44.813227] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.691 [2024-04-26 09:03:44.813235] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.691 [2024-04-26 09:03:44.815768] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.691 [2024-04-26 09:03:44.824006] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.691 [2024-04-26 09:03:44.824565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.825137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.825178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.691 [2024-04-26 09:03:44.825212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.691 [2024-04-26 09:03:44.825506] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.691 [2024-04-26 09:03:44.825665] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.691 [2024-04-26 09:03:44.825677] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.691 [2024-04-26 09:03:44.825686] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.691 [2024-04-26 09:03:44.828134] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.691 [2024-04-26 09:03:44.836656] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.691 [2024-04-26 09:03:44.837262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.837748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.837793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.691 [2024-04-26 09:03:44.837828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.691 [2024-04-26 09:03:44.838266] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.691 [2024-04-26 09:03:44.838423] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.691 [2024-04-26 09:03:44.838434] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.691 [2024-04-26 09:03:44.838442] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.691 [2024-04-26 09:03:44.840901] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.691 [2024-04-26 09:03:44.849425] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.691 [2024-04-26 09:03:44.850056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.850494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.850536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.691 [2024-04-26 09:03:44.850570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.691 [2024-04-26 09:03:44.851159] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.691 [2024-04-26 09:03:44.851618] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.691 [2024-04-26 09:03:44.851633] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.691 [2024-04-26 09:03:44.851646] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.691 [2024-04-26 09:03:44.855367] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.691 [2024-04-26 09:03:44.862868] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.691 [2024-04-26 09:03:44.863481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.863972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.864021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.691 [2024-04-26 09:03:44.864055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.691 [2024-04-26 09:03:44.864563] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.691 [2024-04-26 09:03:44.864722] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.691 [2024-04-26 09:03:44.864733] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.691 [2024-04-26 09:03:44.864742] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.691 [2024-04-26 09:03:44.867191] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.691 [2024-04-26 09:03:44.875513] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.691 [2024-04-26 09:03:44.876112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.876673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.876718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.691 [2024-04-26 09:03:44.876751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.691 [2024-04-26 09:03:44.877340] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.691 [2024-04-26 09:03:44.877635] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.691 [2024-04-26 09:03:44.877647] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.691 [2024-04-26 09:03:44.877656] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.691 [2024-04-26 09:03:44.880103] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.691 [2024-04-26 09:03:44.888194] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.691 [2024-04-26 09:03:44.888908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.889403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.691 [2024-04-26 09:03:44.889444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.691 [2024-04-26 09:03:44.889487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.691 [2024-04-26 09:03:44.890078] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.692 [2024-04-26 09:03:44.890507] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.692 [2024-04-26 09:03:44.890519] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.692 [2024-04-26 09:03:44.890529] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.692 [2024-04-26 09:03:44.893206] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.692 [2024-04-26 09:03:44.901076] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.692 [2024-04-26 09:03:44.901695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.692 [2024-04-26 09:03:44.902170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.692 [2024-04-26 09:03:44.902184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.692 [2024-04-26 09:03:44.902196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.692 [2024-04-26 09:03:44.902366] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.692 [2024-04-26 09:03:44.902540] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.692 [2024-04-26 09:03:44.902552] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.692 [2024-04-26 09:03:44.902561] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.692 [2024-04-26 09:03:44.905216] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.692 [2024-04-26 09:03:44.913997] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.692 [2024-04-26 09:03:44.914685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.692 [2024-04-26 09:03:44.915137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.692 [2024-04-26 09:03:44.915151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.692 [2024-04-26 09:03:44.915161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.692 [2024-04-26 09:03:44.915331] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.692 [2024-04-26 09:03:44.915506] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.692 [2024-04-26 09:03:44.915518] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.692 [2024-04-26 09:03:44.915527] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.692 [2024-04-26 09:03:44.918179] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.692 [2024-04-26 09:03:44.926945] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.692 [2024-04-26 09:03:44.927563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.692 [2024-04-26 09:03:44.928060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.692 [2024-04-26 09:03:44.928074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.692 [2024-04-26 09:03:44.928084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.692 [2024-04-26 09:03:44.928254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.692 [2024-04-26 09:03:44.928424] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.692 [2024-04-26 09:03:44.928436] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.692 [2024-04-26 09:03:44.928445] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.692 [2024-04-26 09:03:44.931103] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.953 [2024-04-26 09:03:44.939875] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.953 [2024-04-26 09:03:44.940553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:44.941058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:44.941071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.953 [2024-04-26 09:03:44.941081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.953 [2024-04-26 09:03:44.941256] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.953 [2024-04-26 09:03:44.941427] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.953 [2024-04-26 09:03:44.941440] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.953 [2024-04-26 09:03:44.941453] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.953 [2024-04-26 09:03:44.944117] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.953 [2024-04-26 09:03:44.952883] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.953 [2024-04-26 09:03:44.953556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:44.953981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:44.953995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.953 [2024-04-26 09:03:44.954006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.953 [2024-04-26 09:03:44.954176] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.953 [2024-04-26 09:03:44.954346] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.953 [2024-04-26 09:03:44.954357] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.953 [2024-04-26 09:03:44.954366] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.953 [2024-04-26 09:03:44.957036] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.953 [2024-04-26 09:03:44.965810] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.953 [2024-04-26 09:03:44.966493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:44.966856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:44.966869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.953 [2024-04-26 09:03:44.966879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.953 [2024-04-26 09:03:44.967050] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.953 [2024-04-26 09:03:44.967219] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.953 [2024-04-26 09:03:44.967231] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.953 [2024-04-26 09:03:44.967240] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.953 [2024-04-26 09:03:44.969910] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.953 [2024-04-26 09:03:44.978720] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.953 [2024-04-26 09:03:44.979395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:44.979823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:44.979838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.953 [2024-04-26 09:03:44.979848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.953 [2024-04-26 09:03:44.980018] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.953 [2024-04-26 09:03:44.980192] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.953 [2024-04-26 09:03:44.980205] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.953 [2024-04-26 09:03:44.980214] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.953 [2024-04-26 09:03:44.982881] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.953 [2024-04-26 09:03:44.991665] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.953 [2024-04-26 09:03:44.992356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:44.992830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:44.992844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.953 [2024-04-26 09:03:44.992854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.953 [2024-04-26 09:03:44.993025] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.953 [2024-04-26 09:03:44.993196] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.953 [2024-04-26 09:03:44.993208] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.953 [2024-04-26 09:03:44.993217] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.953 [2024-04-26 09:03:44.995876] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.953 [2024-04-26 09:03:45.004670] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.953 [2024-04-26 09:03:45.005292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:45.005741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:45.005755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.953 [2024-04-26 09:03:45.005765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.953 [2024-04-26 09:03:45.005934] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.953 [2024-04-26 09:03:45.006104] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.953 [2024-04-26 09:03:45.006115] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.953 [2024-04-26 09:03:45.006125] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.953 [2024-04-26 09:03:45.008786] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.953 [2024-04-26 09:03:45.017568] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.953 [2024-04-26 09:03:45.018234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:45.018756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:45.018770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.953 [2024-04-26 09:03:45.018780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.953 [2024-04-26 09:03:45.018950] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.953 [2024-04-26 09:03:45.019121] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.953 [2024-04-26 09:03:45.019136] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.953 [2024-04-26 09:03:45.019145] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.953 [2024-04-26 09:03:45.021805] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.953 [2024-04-26 09:03:45.030442] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.953 [2024-04-26 09:03:45.031037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:45.031542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.953 [2024-04-26 09:03:45.031584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:27.953 [2024-04-26 09:03:45.031619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:27.953 [2024-04-26 09:03:45.031934] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:27.953 [2024-04-26 09:03:45.032105] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.953 [2024-04-26 09:03:45.032117] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.953 [2024-04-26 09:03:45.032126] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.953 [2024-04-26 09:03:45.034787] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.953 [2024-04-26 09:03:45.043436] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.953 [2024-04-26 09:03:45.044148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.953 [2024-04-26 09:03:45.044705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.953 [2024-04-26 09:03:45.044746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:27.953 [2024-04-26 09:03:45.044779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:27.953 [2024-04-26 09:03:45.045065] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:27.953 [2024-04-26 09:03:45.045237] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.953 [2024-04-26 09:03:45.045249] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.953 [2024-04-26 09:03:45.045258] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.953 [2024-04-26 09:03:45.047922] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.953 [2024-04-26 09:03:45.056274] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.953 [2024-04-26 09:03:45.056974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.953 [2024-04-26 09:03:45.057478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.953 [2024-04-26 09:03:45.057520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:27.953 [2024-04-26 09:03:45.057557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:27.953 [2024-04-26 09:03:45.057721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:27.953 [2024-04-26 09:03:45.057888] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.953 [2024-04-26 09:03:45.057899] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.953 [2024-04-26 09:03:45.057911] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.953 [2024-04-26 09:03:45.060499] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.953 [2024-04-26 09:03:45.068940] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.953 [2024-04-26 09:03:45.069609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.953 [2024-04-26 09:03:45.070170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.953 [2024-04-26 09:03:45.070210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:27.953 [2024-04-26 09:03:45.070243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:27.953 [2024-04-26 09:03:45.070844] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:27.953 [2024-04-26 09:03:45.071151] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.953 [2024-04-26 09:03:45.071162] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.953 [2024-04-26 09:03:45.071171] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.953 [2024-04-26 09:03:45.073620] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.953 [2024-04-26 09:03:45.081640] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.953 [2024-04-26 09:03:45.082315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.953 [2024-04-26 09:03:45.082863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.953 [2024-04-26 09:03:45.082900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:27.953 [2024-04-26 09:03:45.082910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:27.953 [2024-04-26 09:03:45.083068] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:27.953 [2024-04-26 09:03:45.083226] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.954 [2024-04-26 09:03:45.083237] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.954 [2024-04-26 09:03:45.083247] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.954 [2024-04-26 09:03:45.085699] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.954 [2024-04-26 09:03:45.094361] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.954 [2024-04-26 09:03:45.095393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.095979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.096021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:27.954 [2024-04-26 09:03:45.096054] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:27.954 [2024-04-26 09:03:45.096653] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:27.954 [2024-04-26 09:03:45.096961] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.954 [2024-04-26 09:03:45.096972] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.954 [2024-04-26 09:03:45.096981] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.954 [2024-04-26 09:03:45.099433] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.954 [2024-04-26 09:03:45.107131] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.954 [2024-04-26 09:03:45.107580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.108143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.108183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:27.954 [2024-04-26 09:03:45.108215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:27.954 [2024-04-26 09:03:45.108812] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:27.954 [2024-04-26 09:03:45.109182] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.954 [2024-04-26 09:03:45.109193] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.954 [2024-04-26 09:03:45.109202] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.954 [2024-04-26 09:03:45.111664] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.954 [2024-04-26 09:03:45.119900] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.954 [2024-04-26 09:03:45.120591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.121041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.121082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:27.954 [2024-04-26 09:03:45.121116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:27.954 [2024-04-26 09:03:45.121720] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:27.954 [2024-04-26 09:03:45.122068] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.954 [2024-04-26 09:03:45.122079] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.954 [2024-04-26 09:03:45.122088] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.954 [2024-04-26 09:03:45.124618] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.954 [2024-04-26 09:03:45.132643] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.954 [2024-04-26 09:03:45.133331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.133815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.133859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:27.954 [2024-04-26 09:03:45.133892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:27.954 [2024-04-26 09:03:45.134246] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:27.954 [2024-04-26 09:03:45.134405] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.954 [2024-04-26 09:03:45.134416] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.954 [2024-04-26 09:03:45.134424] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.954 [2024-04-26 09:03:45.136875] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.954 [2024-04-26 09:03:45.145384] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.954 [2024-04-26 09:03:45.146105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.146593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.146636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:27.954 [2024-04-26 09:03:45.146668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:27.954 [2024-04-26 09:03:45.147102] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:27.954 [2024-04-26 09:03:45.147260] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.954 [2024-04-26 09:03:45.147271] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.954 [2024-04-26 09:03:45.147279] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.954 [2024-04-26 09:03:45.149980] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.954 [2024-04-26 09:03:45.158233] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.954 [2024-04-26 09:03:45.158916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.159336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.159376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:27.954 [2024-04-26 09:03:45.159408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:27.954 [2024-04-26 09:03:45.160012] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:27.954 [2024-04-26 09:03:45.160317] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.954 [2024-04-26 09:03:45.160328] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.954 [2024-04-26 09:03:45.160337] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.954 [2024-04-26 09:03:45.162845] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.954 [2024-04-26 09:03:45.170931] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.954 [2024-04-26 09:03:45.171364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.171821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.171862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:27.954 [2024-04-26 09:03:45.171894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:27.954 [2024-04-26 09:03:45.172210] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:27.954 [2024-04-26 09:03:45.172369] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.954 [2024-04-26 09:03:45.172380] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.954 [2024-04-26 09:03:45.172405] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.954 [2024-04-26 09:03:45.175036] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.954 [2024-04-26 09:03:45.183669] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.954 [2024-04-26 09:03:45.184354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.184922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.954 [2024-04-26 09:03:45.184964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:27.954 [2024-04-26 09:03:45.184997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:27.954 [2024-04-26 09:03:45.185237] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:27.954 [2024-04-26 09:03:45.185395] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.954 [2024-04-26 09:03:45.185406] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.954 [2024-04-26 09:03:45.185414] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.954 [2024-04-26 09:03:45.187861] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.954 [2024-04-26 09:03:45.196497] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.233 [2024-04-26 09:03:45.197188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.197667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.197682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.233 [2024-04-26 09:03:45.197692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.233 [2024-04-26 09:03:45.197861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.233 [2024-04-26 09:03:45.198032] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.233 [2024-04-26 09:03:45.198043] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.233 [2024-04-26 09:03:45.198053] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.233 [2024-04-26 09:03:45.200725] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.233 [2024-04-26 09:03:45.209344] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.233 [2024-04-26 09:03:45.210025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.210470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.210484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.233 [2024-04-26 09:03:45.210494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.233 [2024-04-26 09:03:45.210659] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.233 [2024-04-26 09:03:45.210824] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.233 [2024-04-26 09:03:45.210836] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.233 [2024-04-26 09:03:45.210845] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.233 [2024-04-26 09:03:45.213436] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
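[Annotation] Each cycle ends with "Failed to flush tqpair ... (9): Bad file descriptor" because the failed connect tears the qpair's socket down before the flush runs; errno 9 is EBADF. A tiny sketch, by analogy only (not SPDK code), shows how I/O on an already-closed descriptor yields that errno:

    /* Sketch: errno 9 (EBADF) from using a descriptor that was already
     * closed, analogous to flushing the torn-down tqpair socket above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = dup(1);   /* any valid descriptor */
        close(fd);         /* teardown closes the socket... */
        if (write(fd, "x", 1) < 0) {
            /* ...so later I/O on it fails with errno = 9 */
            printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        return 0;
    }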
00:29:28.233 [2024-04-26 09:03:45.222034] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.233 [2024-04-26 09:03:45.222724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.223224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.223272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.233 [2024-04-26 09:03:45.223306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.233 [2024-04-26 09:03:45.223905] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.233 [2024-04-26 09:03:45.224435] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.233 [2024-04-26 09:03:45.224446] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.233 [2024-04-26 09:03:45.224458] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.233 [2024-04-26 09:03:45.226902] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.233 [2024-04-26 09:03:45.234783] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.233 [2024-04-26 09:03:45.235475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.235927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.235968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.233 [2024-04-26 09:03:45.236000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.233 [2024-04-26 09:03:45.236403] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.233 [2024-04-26 09:03:45.236654] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.233 [2024-04-26 09:03:45.236670] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.233 [2024-04-26 09:03:45.236683] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.233 [2024-04-26 09:03:45.240397] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.233 [2024-04-26 09:03:45.248049] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.233 [2024-04-26 09:03:45.248729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.249209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.249250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.233 [2024-04-26 09:03:45.249284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.233 [2024-04-26 09:03:45.249885] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.233 [2024-04-26 09:03:45.250352] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.233 [2024-04-26 09:03:45.250363] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.233 [2024-04-26 09:03:45.250372] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.233 [2024-04-26 09:03:45.252824] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.233 [2024-04-26 09:03:45.260765] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.233 [2024-04-26 09:03:45.261202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.261712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.261754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.233 [2024-04-26 09:03:45.261793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.233 [2024-04-26 09:03:45.262381] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.233 [2024-04-26 09:03:45.262559] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.233 [2024-04-26 09:03:45.262570] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.233 [2024-04-26 09:03:45.262579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.233 [2024-04-26 09:03:45.265025] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.233 [2024-04-26 09:03:45.273533] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.233 [2024-04-26 09:03:45.274224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.274736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.274777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.233 [2024-04-26 09:03:45.274810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.233 [2024-04-26 09:03:45.275183] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.233 [2024-04-26 09:03:45.275342] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.233 [2024-04-26 09:03:45.275353] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.233 [2024-04-26 09:03:45.275361] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.233 [2024-04-26 09:03:45.277811] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.233 [2024-04-26 09:03:45.286185] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.233 [2024-04-26 09:03:45.286855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.287416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.287458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.233 [2024-04-26 09:03:45.287467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.233 [2024-04-26 09:03:45.287659] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.233 [2024-04-26 09:03:45.287817] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.233 [2024-04-26 09:03:45.287828] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.233 [2024-04-26 09:03:45.287837] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.233 [2024-04-26 09:03:45.290289] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.233 [2024-04-26 09:03:45.298947] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.233 [2024-04-26 09:03:45.299635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.300196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-04-26 09:03:45.300236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.233 [2024-04-26 09:03:45.300268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.233 [2024-04-26 09:03:45.300886] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.234 [2024-04-26 09:03:45.301423] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-04-26 09:03:45.301434] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-04-26 09:03:45.301442] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-04-26 09:03:45.303971] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.234 [2024-04-26 09:03:45.311627] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-04-26 09:03:45.312315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.312874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.312917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-04-26 09:03:45.312949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.234 [2024-04-26 09:03:45.313287] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.234 [2024-04-26 09:03:45.313445] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-04-26 09:03:45.313460] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-04-26 09:03:45.313469] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-04-26 09:03:45.315912] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.234 [2024-04-26 09:03:45.324280] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-04-26 09:03:45.324976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.325502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.325545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-04-26 09:03:45.325577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.234 [2024-04-26 09:03:45.326165] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.234 [2024-04-26 09:03:45.326610] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-04-26 09:03:45.326621] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-04-26 09:03:45.326629] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-04-26 09:03:45.329075] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.234 [2024-04-26 09:03:45.337104] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-04-26 09:03:45.337780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.338344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.338384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-04-26 09:03:45.338417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.234 [2024-04-26 09:03:45.338787] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.234 [2024-04-26 09:03:45.338948] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-04-26 09:03:45.338960] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-04-26 09:03:45.338969] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-04-26 09:03:45.341415] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.234 [2024-04-26 09:03:45.349793] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-04-26 09:03:45.350399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.350900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.350942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-04-26 09:03:45.350974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.234 [2024-04-26 09:03:45.351411] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.234 [2024-04-26 09:03:45.351573] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-04-26 09:03:45.351585] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-04-26 09:03:45.351593] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-04-26 09:03:45.354046] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.234 [2024-04-26 09:03:45.362559] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-04-26 09:03:45.363114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.363607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.363621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-04-26 09:03:45.363631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.234 [2024-04-26 09:03:45.363788] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.234 [2024-04-26 09:03:45.363945] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-04-26 09:03:45.363956] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-04-26 09:03:45.363964] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-04-26 09:03:45.366414] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.234 [2024-04-26 09:03:45.375213] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-04-26 09:03:45.375885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.376254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.376267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-04-26 09:03:45.376276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.234 [2024-04-26 09:03:45.376432] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.234 [2024-04-26 09:03:45.376595] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-04-26 09:03:45.376609] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-04-26 09:03:45.376618] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-04-26 09:03:45.379065] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.234 [2024-04-26 09:03:45.387864] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-04-26 09:03:45.388559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.388773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.388787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-04-26 09:03:45.388796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.234 [2024-04-26 09:03:45.388962] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.234 [2024-04-26 09:03:45.389128] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-04-26 09:03:45.389139] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-04-26 09:03:45.389148] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-04-26 09:03:45.391821] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.234 [2024-04-26 09:03:45.400762] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-04-26 09:03:45.401429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.401939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.401953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-04-26 09:03:45.401963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.234 [2024-04-26 09:03:45.402132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.234 [2024-04-26 09:03:45.402302] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-04-26 09:03:45.402314] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-04-26 09:03:45.402323] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.234 [2024-04-26 09:03:45.404986] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.234 [2024-04-26 09:03:45.413800] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.234 [2024-04-26 09:03:45.414471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.414963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-04-26 09:03:45.414977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.234 [2024-04-26 09:03:45.414987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.234 [2024-04-26 09:03:45.415157] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.234 [2024-04-26 09:03:45.415326] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.234 [2024-04-26 09:03:45.415338] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.234 [2024-04-26 09:03:45.415350] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.235 [2024-04-26 09:03:45.417949] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.235 [2024-04-26 09:03:45.426737] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.235 [2024-04-26 09:03:45.427424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.235 [2024-04-26 09:03:45.427669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.235 [2024-04-26 09:03:45.427683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.235 [2024-04-26 09:03:45.427693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.235 [2024-04-26 09:03:45.427859] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.235 [2024-04-26 09:03:45.428023] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.235 [2024-04-26 09:03:45.428035] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.235 [2024-04-26 09:03:45.428043] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.235 [2024-04-26 09:03:45.430633] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.235 [2024-04-26 09:03:45.439634] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.235 [2024-04-26 09:03:45.440271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.235 [2024-04-26 09:03:45.440833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.235 [2024-04-26 09:03:45.440876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.235 [2024-04-26 09:03:45.440909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.235 [2024-04-26 09:03:45.441507] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.235 [2024-04-26 09:03:45.442023] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.235 [2024-04-26 09:03:45.442035] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.235 [2024-04-26 09:03:45.442044] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.235 [2024-04-26 09:03:45.444634] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.235 [2024-04-26 09:03:45.452546] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.235 [2024-04-26 09:03:45.453246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.235 [2024-04-26 09:03:45.453801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.235 [2024-04-26 09:03:45.453843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.235 [2024-04-26 09:03:45.453877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.235 [2024-04-26 09:03:45.454474] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.235 [2024-04-26 09:03:45.454816] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.235 [2024-04-26 09:03:45.454828] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.235 [2024-04-26 09:03:45.454837] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.235 [2024-04-26 09:03:45.457423] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.235 [2024-04-26 09:03:45.465455] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.235 [2024-04-26 09:03:45.466148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.235 [2024-04-26 09:03:45.466652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.235 [2024-04-26 09:03:45.466666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.235 [2024-04-26 09:03:45.466677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.235 [2024-04-26 09:03:45.466847] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.235 [2024-04-26 09:03:45.467017] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.235 [2024-04-26 09:03:45.467029] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.235 [2024-04-26 09:03:45.467037] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.235 [2024-04-26 09:03:45.469700] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.235 [2024-04-26 09:03:45.478329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.496 [2024-04-26 09:03:45.479026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.479527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.479541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.496 [2024-04-26 09:03:45.479551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.496 [2024-04-26 09:03:45.479721] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.496 [2024-04-26 09:03:45.479892] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.496 [2024-04-26 09:03:45.479903] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.496 [2024-04-26 09:03:45.479912] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.496 [2024-04-26 09:03:45.482574] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.496 [2024-04-26 09:03:45.491227] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.496 [2024-04-26 09:03:45.491945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.492417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.492430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.496 [2024-04-26 09:03:45.492440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.496 [2024-04-26 09:03:45.492614] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.496 [2024-04-26 09:03:45.492785] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.496 [2024-04-26 09:03:45.492796] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.496 [2024-04-26 09:03:45.492805] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.496 [2024-04-26 09:03:45.495465] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.496 [2024-04-26 09:03:45.504056] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.496 [2024-04-26 09:03:45.504775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.505251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.505275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.496 [2024-04-26 09:03:45.505285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.496 [2024-04-26 09:03:45.505454] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.496 [2024-04-26 09:03:45.505619] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.496 [2024-04-26 09:03:45.505631] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.496 [2024-04-26 09:03:45.505639] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.496 [2024-04-26 09:03:45.508215] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.496 [2024-04-26 09:03:45.516899] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.496 [2024-04-26 09:03:45.517587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.518083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.518096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.496 [2024-04-26 09:03:45.518106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.496 [2024-04-26 09:03:45.518270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.496 [2024-04-26 09:03:45.518436] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.496 [2024-04-26 09:03:45.518447] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.496 [2024-04-26 09:03:45.518462] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.496 [2024-04-26 09:03:45.520971] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.496 [2024-04-26 09:03:45.529715] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.496 [2024-04-26 09:03:45.530248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.530746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.530760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.496 [2024-04-26 09:03:45.530769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.496 [2024-04-26 09:03:45.530934] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.496 [2024-04-26 09:03:45.531120] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.496 [2024-04-26 09:03:45.531131] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.496 [2024-04-26 09:03:45.531140] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.496 [2024-04-26 09:03:45.533806] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.496 [2024-04-26 09:03:45.542579] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.496 [2024-04-26 09:03:45.543265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.543836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.543879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.496 [2024-04-26 09:03:45.543912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.496 [2024-04-26 09:03:45.544511] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.496 [2024-04-26 09:03:45.545005] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.496 [2024-04-26 09:03:45.545016] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.496 [2024-04-26 09:03:45.545025] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.496 [2024-04-26 09:03:45.547687] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.496 [2024-04-26 09:03:45.555402] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.496 [2024-04-26 09:03:45.556122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.556629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.556672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.496 [2024-04-26 09:03:45.556705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.496 [2024-04-26 09:03:45.557094] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.496 [2024-04-26 09:03:45.557252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.496 [2024-04-26 09:03:45.557263] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.496 [2024-04-26 09:03:45.557273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.496 [2024-04-26 09:03:45.559800] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.496 [2024-04-26 09:03:45.568170] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.496 [2024-04-26 09:03:45.568866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.569419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.569472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.496 [2024-04-26 09:03:45.569505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.496 [2024-04-26 09:03:45.569935] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.496 [2024-04-26 09:03:45.570094] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.496 [2024-04-26 09:03:45.570105] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.496 [2024-04-26 09:03:45.570114] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.496 [2024-04-26 09:03:45.573600] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.496 [2024-04-26 09:03:45.581620] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.496 [2024-04-26 09:03:45.582235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.582664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.582715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.496 [2024-04-26 09:03:45.582749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.496 [2024-04-26 09:03:45.583005] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.496 [2024-04-26 09:03:45.583163] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.496 [2024-04-26 09:03:45.583174] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.496 [2024-04-26 09:03:45.583182] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.496 [2024-04-26 09:03:45.585716] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.496 [2024-04-26 09:03:45.594374] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.496 [2024-04-26 09:03:45.594959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.595516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.595559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.496 [2024-04-26 09:03:45.595591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.496 [2024-04-26 09:03:45.596125] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.496 [2024-04-26 09:03:45.596283] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.496 [2024-04-26 09:03:45.596294] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.496 [2024-04-26 09:03:45.596302] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.496 [2024-04-26 09:03:45.598752] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.496 [2024-04-26 09:03:45.607129] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.496 [2024-04-26 09:03:45.607820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.608334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.608374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.496 [2024-04-26 09:03:45.608408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.496 [2024-04-26 09:03:45.608953] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.496 [2024-04-26 09:03:45.609120] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.496 [2024-04-26 09:03:45.609131] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.496 [2024-04-26 09:03:45.609140] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.496 [2024-04-26 09:03:45.611632] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.496 [2024-04-26 09:03:45.619779] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.496 [2024-04-26 09:03:45.620409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.496 [2024-04-26 09:03:45.620904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.620945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.497 [2024-04-26 09:03:45.620987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.497 [2024-04-26 09:03:45.621433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.497 [2024-04-26 09:03:45.621678] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.497 [2024-04-26 09:03:45.621693] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.497 [2024-04-26 09:03:45.621706] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.497 [2024-04-26 09:03:45.625427] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.497 [2024-04-26 09:03:45.632843] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.497 [2024-04-26 09:03:45.633534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.634038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.634078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.497 [2024-04-26 09:03:45.634110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.497 [2024-04-26 09:03:45.634408] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.497 [2024-04-26 09:03:45.634576] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.497 [2024-04-26 09:03:45.634587] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.497 [2024-04-26 09:03:45.634595] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.497 [2024-04-26 09:03:45.637111] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.497 [2024-04-26 09:03:45.645604] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.497 [2024-04-26 09:03:45.646299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.646888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.646931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.497 [2024-04-26 09:03:45.646964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.497 [2024-04-26 09:03:45.647347] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.497 [2024-04-26 09:03:45.647517] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.497 [2024-04-26 09:03:45.647529] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.497 [2024-04-26 09:03:45.647538] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.497 [2024-04-26 09:03:45.649997] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
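The "Failed to flush tqpair=... (9): Bad file descriptor" entry in each cycle is a consequence of the failed connect: by the time nvme_tcp_qpair_process_completions tries to flush the qpair, its socket has already been torn down, so the write path sees errno 9 (EBADF). A tiny sketch of that errno, using dup/close on stdout as a stand-in for the qpair's socket fd (assumption for illustration only):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = dup(STDOUT_FILENO); /* stand-in for the qpair's socket fd */

    close(fd);                   /* socket already gone after the failed connect */
    if (write(fd, "x", 1) < 0) {
        /* prints errno 9 (EBADF), matching the "(9): Bad file
         * descriptor" the flush reports in the log */
        printf("write() failed, errno = %d (%s)\n",
               errno, strerror(errno));
    }
    return 0;
}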
00:29:28.497 [2024-04-26 09:03:45.658368] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.497 [2024-04-26 09:03:45.659050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.659592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.659635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.497 [2024-04-26 09:03:45.659668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.497 [2024-04-26 09:03:45.660053] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.497 [2024-04-26 09:03:45.660228] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.497 [2024-04-26 09:03:45.660239] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.497 [2024-04-26 09:03:45.660249] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.497 [2024-04-26 09:03:45.662905] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.497 [2024-04-26 09:03:45.671261] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.497 [2024-04-26 09:03:45.671878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.672361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.672402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.497 [2024-04-26 09:03:45.672435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.497 [2024-04-26 09:03:45.673041] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.497 [2024-04-26 09:03:45.673200] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.497 [2024-04-26 09:03:45.673211] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.497 [2024-04-26 09:03:45.673219] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.497 [2024-04-26 09:03:45.675813] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.497 [2024-04-26 09:03:45.684134] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.497 [2024-04-26 09:03:45.684833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.685310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.685351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.497 [2024-04-26 09:03:45.685384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.497 [2024-04-26 09:03:45.685983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.497 [2024-04-26 09:03:45.686411] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.497 [2024-04-26 09:03:45.686422] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.497 [2024-04-26 09:03:45.686432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.497 [2024-04-26 09:03:45.688948] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.497 [2024-04-26 09:03:45.696874] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.497 [2024-04-26 09:03:45.697566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.698063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.698103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.497 [2024-04-26 09:03:45.698136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.497 [2024-04-26 09:03:45.698544] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.497 [2024-04-26 09:03:45.698705] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.497 [2024-04-26 09:03:45.698716] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.497 [2024-04-26 09:03:45.698725] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.497 [2024-04-26 09:03:45.701172] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.497 [2024-04-26 09:03:45.709625] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.497 [2024-04-26 09:03:45.710302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.710735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.710778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.497 [2024-04-26 09:03:45.710811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.497 [2024-04-26 09:03:45.711214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.497 [2024-04-26 09:03:45.711372] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.497 [2024-04-26 09:03:45.711384] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.497 [2024-04-26 09:03:45.711392] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.497 [2024-04-26 09:03:45.713851] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.497 [2024-04-26 09:03:45.722375] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.497 [2024-04-26 09:03:45.723070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.723632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.723674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.497 [2024-04-26 09:03:45.723708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.497 [2024-04-26 09:03:45.724295] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.497 [2024-04-26 09:03:45.724484] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.497 [2024-04-26 09:03:45.724495] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.497 [2024-04-26 09:03:45.724503] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.497 [2024-04-26 09:03:45.726950] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.497 [2024-04-26 09:03:45.735044] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.497 [2024-04-26 09:03:45.735728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.736228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.497 [2024-04-26 09:03:45.736269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.497 [2024-04-26 09:03:45.736301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.497 [2024-04-26 09:03:45.736809] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.498 [2024-04-26 09:03:45.736980] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.498 [2024-04-26 09:03:45.736995] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.498 [2024-04-26 09:03:45.737005] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.498 [2024-04-26 09:03:45.739673] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.756 [2024-04-26 09:03:45.747987] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.756 [2024-04-26 09:03:45.748628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.756 [2024-04-26 09:03:45.749189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.756 [2024-04-26 09:03:45.749229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.756 [2024-04-26 09:03:45.749263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.756 [2024-04-26 09:03:45.749626] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.756 [2024-04-26 09:03:45.749784] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.756 [2024-04-26 09:03:45.749795] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.756 [2024-04-26 09:03:45.749803] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.756 [2024-04-26 09:03:45.752337] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.756 [2024-04-26 09:03:45.760746] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.756 [2024-04-26 09:03:45.761440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.761928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.761968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.757 [2024-04-26 09:03:45.762001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.757 [2024-04-26 09:03:45.762542] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.757 [2024-04-26 09:03:45.762781] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.757 [2024-04-26 09:03:45.762796] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.757 [2024-04-26 09:03:45.762809] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.757 [2024-04-26 09:03:45.766528] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.757 [2024-04-26 09:03:45.774304] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.757 [2024-04-26 09:03:45.774974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.775512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.775554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.757 [2024-04-26 09:03:45.775587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.757 [2024-04-26 09:03:45.776173] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.757 [2024-04-26 09:03:45.776611] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.757 [2024-04-26 09:03:45.776622] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.757 [2024-04-26 09:03:45.776634] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.757 [2024-04-26 09:03:45.779079] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.757 [2024-04-26 09:03:45.787015] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.757 [2024-04-26 09:03:45.787706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.788201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.788242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.757 [2024-04-26 09:03:45.788274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.757 [2024-04-26 09:03:45.788651] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.757 [2024-04-26 09:03:45.788810] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.757 [2024-04-26 09:03:45.788821] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.757 [2024-04-26 09:03:45.788829] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.757 [2024-04-26 09:03:45.791279] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.757 [2024-04-26 09:03:45.799658] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.757 [2024-04-26 09:03:45.800339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.800656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.800698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.757 [2024-04-26 09:03:45.800731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.757 [2024-04-26 09:03:45.801318] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.757 [2024-04-26 09:03:45.801861] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.757 [2024-04-26 09:03:45.801871] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.757 [2024-04-26 09:03:45.801880] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.757 [2024-04-26 09:03:45.804341] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.757 [2024-04-26 09:03:45.812422] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.757 [2024-04-26 09:03:45.813115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.813676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.813717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.757 [2024-04-26 09:03:45.813750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.757 [2024-04-26 09:03:45.814337] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.757 [2024-04-26 09:03:45.814842] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.757 [2024-04-26 09:03:45.814853] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.757 [2024-04-26 09:03:45.814861] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.757 [2024-04-26 09:03:45.817320] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.757 [2024-04-26 09:03:45.825109] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.757 [2024-04-26 09:03:45.825800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.826362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.826402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.757 [2024-04-26 09:03:45.826435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.757 [2024-04-26 09:03:45.826923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.757 [2024-04-26 09:03:45.827080] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.757 [2024-04-26 09:03:45.827091] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.757 [2024-04-26 09:03:45.827099] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.757 [2024-04-26 09:03:45.829549] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.757 [2024-04-26 09:03:45.837767] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.757 [2024-04-26 09:03:45.838435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.838984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.839024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.757 [2024-04-26 09:03:45.839057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.757 [2024-04-26 09:03:45.839515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.757 [2024-04-26 09:03:45.839672] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.757 [2024-04-26 09:03:45.839684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.757 [2024-04-26 09:03:45.839692] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.757 [2024-04-26 09:03:45.842133] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.757 [2024-04-26 09:03:45.850445] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.757 [2024-04-26 09:03:45.851138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.851674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.757 [2024-04-26 09:03:45.851716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.758 [2024-04-26 09:03:45.851749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.758 [2024-04-26 09:03:45.852015] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.758 [2024-04-26 09:03:45.852173] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.758 [2024-04-26 09:03:45.852184] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.758 [2024-04-26 09:03:45.852192] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.758 [2024-04-26 09:03:45.854643] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.758 [2024-04-26 09:03:45.863175] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.758 [2024-04-26 09:03:45.863865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-04-26 09:03:45.864427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-04-26 09:03:45.864480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.758 [2024-04-26 09:03:45.864514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.758 [2024-04-26 09:03:45.865055] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.758 [2024-04-26 09:03:45.865213] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.758 [2024-04-26 09:03:45.865224] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.758 [2024-04-26 09:03:45.865233] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.758 [2024-04-26 09:03:45.867684] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.758 [2024-04-26 09:03:45.875930] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.758 [2024-04-26 09:03:45.876547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-04-26 09:03:45.877101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-04-26 09:03:45.877142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.758 [2024-04-26 09:03:45.877175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.758 [2024-04-26 09:03:45.877609] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.758 [2024-04-26 09:03:45.877768] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.758 [2024-04-26 09:03:45.877779] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.758 [2024-04-26 09:03:45.877788] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.758 [2024-04-26 09:03:45.880321] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.758 [2024-04-26 09:03:45.888688] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.758 [2024-04-26 09:03:45.889355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-04-26 09:03:45.889912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-04-26 09:03:45.889954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.758 [2024-04-26 09:03:45.889987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.758 [2024-04-26 09:03:45.890586] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.758 [2024-04-26 09:03:45.890898] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.758 [2024-04-26 09:03:45.890910] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.758 [2024-04-26 09:03:45.890918] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.758 [2024-04-26 09:03:45.893366] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.758 [2024-04-26 09:03:45.901436] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.758 [2024-04-26 09:03:45.902133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-04-26 09:03:45.902617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-04-26 09:03:45.902659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.758 [2024-04-26 09:03:45.902691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.758 [2024-04-26 09:03:45.903112] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.758 [2024-04-26 09:03:45.903270] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.758 [2024-04-26 09:03:45.903281] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.758 [2024-04-26 09:03:45.903289] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.758 [2024-04-26 09:03:45.905741] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.758 [2024-04-26 09:03:45.914210] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.758 [2024-04-26 09:03:45.914914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-04-26 09:03:45.915465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-04-26 09:03:45.915507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.758 [2024-04-26 09:03:45.915540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.758 [2024-04-26 09:03:45.915915] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.758 [2024-04-26 09:03:45.916081] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.758 [2024-04-26 09:03:45.916092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.758 [2024-04-26 09:03:45.916101] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.758 [2024-04-26 09:03:45.918771] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.758 [2024-04-26 09:03:45.927159] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.758 [2024-04-26 09:03:45.927779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-04-26 09:03:45.928266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.758 [2024-04-26 09:03:45.928306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.758 [2024-04-26 09:03:45.928338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.758 [2024-04-26 09:03:45.928759] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.758 [2024-04-26 09:03:45.928918] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.759 [2024-04-26 09:03:45.928928] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.759 [2024-04-26 09:03:45.928937] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.759 [2024-04-26 09:03:45.931383] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.759 [2024-04-26 09:03:45.939890] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.759 [2024-04-26 09:03:45.940561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-04-26 09:03:45.941100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-04-26 09:03:45.941147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.759 [2024-04-26 09:03:45.941180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.759 [2024-04-26 09:03:45.941782] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.759 [2024-04-26 09:03:45.942014] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.759 [2024-04-26 09:03:45.942025] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.759 [2024-04-26 09:03:45.942033] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.759 [2024-04-26 09:03:45.944481] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.759 [2024-04-26 09:03:45.952553] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.759 [2024-04-26 09:03:45.953215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-04-26 09:03:45.953698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-04-26 09:03:45.953741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.759 [2024-04-26 09:03:45.953774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.759 [2024-04-26 09:03:45.954307] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.759 [2024-04-26 09:03:45.954550] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.759 [2024-04-26 09:03:45.954566] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.759 [2024-04-26 09:03:45.954579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.759 [2024-04-26 09:03:45.958297] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.759 [2024-04-26 09:03:45.966007] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.759 [2024-04-26 09:03:45.966705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-04-26 09:03:45.967272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-04-26 09:03:45.967313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.759 [2024-04-26 09:03:45.967345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.759 [2024-04-26 09:03:45.967836] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.759 [2024-04-26 09:03:45.967994] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.759 [2024-04-26 09:03:45.968005] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.759 [2024-04-26 09:03:45.968015] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.759 [2024-04-26 09:03:45.970543] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.759 [2024-04-26 09:03:45.978762] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.759 [2024-04-26 09:03:45.979438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-04-26 09:03:45.979947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-04-26 09:03:45.979989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.759 [2024-04-26 09:03:45.980029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.759 [2024-04-26 09:03:45.980371] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.759 [2024-04-26 09:03:45.980535] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.759 [2024-04-26 09:03:45.980546] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.759 [2024-04-26 09:03:45.980554] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.759 [2024-04-26 09:03:45.983004] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.759 [2024-04-26 09:03:45.991509] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.759 [2024-04-26 09:03:45.992194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-04-26 09:03:45.992678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.759 [2024-04-26 09:03:45.992719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:28.759 [2024-04-26 09:03:45.992752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:28.759 [2024-04-26 09:03:45.993312] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:28.759 [2024-04-26 09:03:45.993474] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.759 [2024-04-26 09:03:45.993485] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.759 [2024-04-26 09:03:45.993493] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.759 [2024-04-26 09:03:45.995943] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.018 [2024-04-26 09:03:46.004426] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.018 [2024-04-26 09:03:46.005103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.018 [2024-04-26 09:03:46.005655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.018 [2024-04-26 09:03:46.005697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.018 [2024-04-26 09:03:46.005730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.018 [2024-04-26 09:03:46.006175] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.018 [2024-04-26 09:03:46.006333] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.018 [2024-04-26 09:03:46.006344] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.018 [2024-04-26 09:03:46.006352] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.018 [2024-04-26 09:03:46.008939] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.018 [2024-04-26 09:03:46.017155] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.018 [2024-04-26 09:03:46.017836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.018 [2024-04-26 09:03:46.018341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.018 [2024-04-26 09:03:46.018381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.018 [2024-04-26 09:03:46.018413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.018 [2024-04-26 09:03:46.018915] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.018 [2024-04-26 09:03:46.019073] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.018 [2024-04-26 09:03:46.019084] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.018 [2024-04-26 09:03:46.019093] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.018 [2024-04-26 09:03:46.021542] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.018 [2024-04-26 09:03:46.030038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.018 [2024-04-26 09:03:46.030726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.018 [2024-04-26 09:03:46.031222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.018 [2024-04-26 09:03:46.031235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.018 [2024-04-26 09:03:46.031245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.018 [2024-04-26 09:03:46.031410] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.018 [2024-04-26 09:03:46.031600] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.018 [2024-04-26 09:03:46.031612] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.018 [2024-04-26 09:03:46.031621] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.018 [2024-04-26 09:03:46.034429] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... 46 further identical reset cycles elided (09:03:46.042885 through 09:03:46.627280, one attempt roughly every 13 ms): each cycle logs the same sequence as above, connect() failed, errno = 111 against addr=10.0.0.2, port=4420 on tqpair=0x1e32b80, then "controller reinitialization failed", "in failed state.", and "Resetting controller failed." ...]
00:29:29.541 [2024-04-26 09:03:46.636117] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.541 [2024-04-26 09:03:46.636802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-04-26 09:03:46.637304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-04-26 09:03:46.637317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.541 [2024-04-26 09:03:46.637327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.541 [2024-04-26 09:03:46.637495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.541 [2024-04-26 09:03:46.637661] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.541 [2024-04-26 09:03:46.637673] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.541 [2024-04-26 09:03:46.637682] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.541 [2024-04-26 09:03:46.640267] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.541 [2024-04-26 09:03:46.648931] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.541 [2024-04-26 09:03:46.649630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-04-26 09:03:46.650215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-04-26 09:03:46.650256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.541 [2024-04-26 09:03:46.650289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.541 [2024-04-26 09:03:46.650714] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.541 [2024-04-26 09:03:46.650872] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.541 [2024-04-26 09:03:46.650883] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.541 [2024-04-26 09:03:46.650891] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.541 [2024-04-26 09:03:46.653361] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.541 [2024-04-26 09:03:46.661772] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.541 [2024-04-26 09:03:46.662476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-04-26 09:03:46.663037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-04-26 09:03:46.663085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.541 [2024-04-26 09:03:46.663118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.541 [2024-04-26 09:03:46.663569] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.541 [2024-04-26 09:03:46.663735] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.541 [2024-04-26 09:03:46.663747] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.541 [2024-04-26 09:03:46.663756] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.541 [2024-04-26 09:03:46.666338] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.541 [2024-04-26 09:03:46.674569] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.541 [2024-04-26 09:03:46.675263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-04-26 09:03:46.675863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-04-26 09:03:46.675906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.541 [2024-04-26 09:03:46.675939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.541 [2024-04-26 09:03:46.676535] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.541 [2024-04-26 09:03:46.676941] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.541 [2024-04-26 09:03:46.676953] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.541 [2024-04-26 09:03:46.676962] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.541 [2024-04-26 09:03:46.679548] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.541 [2024-04-26 09:03:46.687435] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.541 [2024-04-26 09:03:46.688156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-04-26 09:03:46.688670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-04-26 09:03:46.688713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.541 [2024-04-26 09:03:46.688746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.541 [2024-04-26 09:03:46.689333] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.541 [2024-04-26 09:03:46.689911] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.541 [2024-04-26 09:03:46.689923] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.542 [2024-04-26 09:03:46.689931] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.542 [2024-04-26 09:03:46.692620] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.542 [2024-04-26 09:03:46.700340] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.542 [2024-04-26 09:03:46.701031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-04-26 09:03:46.701544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-04-26 09:03:46.701586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.542 [2024-04-26 09:03:46.701636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.542 [2024-04-26 09:03:46.701793] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.542 [2024-04-26 09:03:46.701951] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.542 [2024-04-26 09:03:46.701962] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.542 [2024-04-26 09:03:46.701970] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.542 [2024-04-26 09:03:46.704419] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.542 [2024-04-26 09:03:46.713042] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.542 [2024-04-26 09:03:46.713734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-04-26 09:03:46.714294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-04-26 09:03:46.714335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.542 [2024-04-26 09:03:46.714368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.542 [2024-04-26 09:03:46.714856] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.542 [2024-04-26 09:03:46.715016] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.542 [2024-04-26 09:03:46.715026] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.542 [2024-04-26 09:03:46.715035] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.542 [2024-04-26 09:03:46.717484] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.542 [2024-04-26 09:03:46.725716] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.542 [2024-04-26 09:03:46.726402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-04-26 09:03:46.726997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-04-26 09:03:46.727039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.542 [2024-04-26 09:03:46.727072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.542 [2024-04-26 09:03:46.727673] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.542 [2024-04-26 09:03:46.728090] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.542 [2024-04-26 09:03:46.728101] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.542 [2024-04-26 09:03:46.728110] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.542 [2024-04-26 09:03:46.730562] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.542 [2024-04-26 09:03:46.738356] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.542 [2024-04-26 09:03:46.739045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-04-26 09:03:46.739629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-04-26 09:03:46.739671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.542 [2024-04-26 09:03:46.739705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.542 [2024-04-26 09:03:46.740031] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.542 [2024-04-26 09:03:46.740190] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.542 [2024-04-26 09:03:46.740201] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.542 [2024-04-26 09:03:46.740209] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.542 [2024-04-26 09:03:46.742680] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.542 [2024-04-26 09:03:46.751102] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.542 [2024-04-26 09:03:46.751714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-04-26 09:03:46.752270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-04-26 09:03:46.752311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.542 [2024-04-26 09:03:46.752344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.542 [2024-04-26 09:03:46.752751] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.542 [2024-04-26 09:03:46.752989] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.542 [2024-04-26 09:03:46.753005] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.542 [2024-04-26 09:03:46.753017] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.542 [2024-04-26 09:03:46.756733] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.542 [2024-04-26 09:03:46.764415] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.542 [2024-04-26 09:03:46.765104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-04-26 09:03:46.765692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-04-26 09:03:46.765727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.542 [2024-04-26 09:03:46.765737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.542 [2024-04-26 09:03:46.765893] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.542 [2024-04-26 09:03:46.766051] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.542 [2024-04-26 09:03:46.766061] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.542 [2024-04-26 09:03:46.766070] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.542 [2024-04-26 09:03:46.768521] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.542 [2024-04-26 09:03:46.777179] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.542 [2024-04-26 09:03:46.777874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-04-26 09:03:46.778466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-04-26 09:03:46.778507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.542 [2024-04-26 09:03:46.778539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.542 [2024-04-26 09:03:46.778831] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.542 [2024-04-26 09:03:46.778992] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.542 [2024-04-26 09:03:46.779003] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.542 [2024-04-26 09:03:46.779011] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.542 [2024-04-26 09:03:46.781461] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.802 [2024-04-26 09:03:46.789939] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.802 [2024-04-26 09:03:46.790624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.802 [2024-04-26 09:03:46.791198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.802 [2024-04-26 09:03:46.791244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.802 [2024-04-26 09:03:46.791277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.802 [2024-04-26 09:03:46.791878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.802 [2024-04-26 09:03:46.792093] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.802 [2024-04-26 09:03:46.792104] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.802 [2024-04-26 09:03:46.792113] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.802 [2024-04-26 09:03:46.794684] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.802 [2024-04-26 09:03:46.802696] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.802 [2024-04-26 09:03:46.803311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.802 [2024-04-26 09:03:46.803872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.802 [2024-04-26 09:03:46.803914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.802 [2024-04-26 09:03:46.803947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.802 [2024-04-26 09:03:46.804545] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.802 [2024-04-26 09:03:46.805089] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.802 [2024-04-26 09:03:46.805101] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.802 [2024-04-26 09:03:46.805109] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.802 [2024-04-26 09:03:46.807560] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.802 [2024-04-26 09:03:46.815358] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.802 [2024-04-26 09:03:46.816046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.802 [2024-04-26 09:03:46.816629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.802 [2024-04-26 09:03:46.816671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.802 [2024-04-26 09:03:46.816704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.802 [2024-04-26 09:03:46.817107] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.802 [2024-04-26 09:03:46.817265] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.802 [2024-04-26 09:03:46.817279] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.802 [2024-04-26 09:03:46.817288] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.802 [2024-04-26 09:03:46.819740] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.802 [2024-04-26 09:03:46.828110] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.802 [2024-04-26 09:03:46.828792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.802 [2024-04-26 09:03:46.829379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.802 [2024-04-26 09:03:46.829419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.802 [2024-04-26 09:03:46.829466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.802 [2024-04-26 09:03:46.830053] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.802 [2024-04-26 09:03:46.830412] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.802 [2024-04-26 09:03:46.830423] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.802 [2024-04-26 09:03:46.830433] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.802 [2024-04-26 09:03:46.832880] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.802 [2024-04-26 09:03:46.840818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.802 [2024-04-26 09:03:46.841505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.802 [2024-04-26 09:03:46.842061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.802 [2024-04-26 09:03:46.842101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.802 [2024-04-26 09:03:46.842134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.802 [2024-04-26 09:03:46.842443] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.802 [2024-04-26 09:03:46.842608] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.802 [2024-04-26 09:03:46.842619] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.802 [2024-04-26 09:03:46.842627] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.802 [2024-04-26 09:03:46.845073] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.802 [2024-04-26 09:03:46.853581] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.802 [2024-04-26 09:03:46.854271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.802 [2024-04-26 09:03:46.854848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.802 [2024-04-26 09:03:46.854891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.802 [2024-04-26 09:03:46.854923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.802 [2024-04-26 09:03:46.855419] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.802 [2024-04-26 09:03:46.855582] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.802 [2024-04-26 09:03:46.855593] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.802 [2024-04-26 09:03:46.855605] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.802 [2024-04-26 09:03:46.858055] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
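Note: the errno = 111 repeated in the connect() failures above is ECONNREFUSED — nothing is accepting TCP on 10.0.0.2:4420 while the target is down, so every reconnect attempt fails immediately and bdev_nvme keeps retrying until the harness restarts the target below. A minimal probe, not part of the test scripts, that reproduces the same condition from a shell (address and port taken from the log):

  # Hypothetical probe: does anything accept TCP on the target's listener?
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "listener up"
  else
    echo "connect failed (ECONNREFUSED, errno 111, while no target listens)"
  fi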
00:29:29.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2230433 Killed "${NVMF_APP[@]}" "$@" 00:29:29.802 09:03:46 -- host/bdevperf.sh@36 -- # tgt_init 00:29:29.803 09:03:46 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:29.803 09:03:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:29.803 09:03:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:29.803 09:03:46 -- common/autotest_common.sh@10 -- # set +x 00:29:29.803 [2024-04-26 09:03:46.866480] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.803 [2024-04-26 09:03:46.867091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.867615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.867628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.803 [2024-04-26 09:03:46.867638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.803 [2024-04-26 09:03:46.867803] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.803 [2024-04-26 09:03:46.867968] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.803 [2024-04-26 09:03:46.867979] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.803 [2024-04-26 09:03:46.867988] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.803 [2024-04-26 09:03:46.870654] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.803 09:03:46 -- nvmf/common.sh@470 -- # nvmfpid=2232011 00:29:29.803 09:03:46 -- nvmf/common.sh@471 -- # waitforlisten 2232011 00:29:29.803 09:03:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:29.803 09:03:46 -- common/autotest_common.sh@817 -- # '[' -z 2232011 ']' 00:29:29.803 09:03:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.803 09:03:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:29.803 09:03:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:29.803 09:03:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:29.803 09:03:46 -- common/autotest_common.sh@10 -- # set +x 00:29:29.803 [2024-04-26 09:03:46.879437] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.803 [2024-04-26 09:03:46.880135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.880558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.880572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.803 [2024-04-26 09:03:46.880583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.803 [2024-04-26 09:03:46.880752] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.803 [2024-04-26 09:03:46.880923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.803 [2024-04-26 09:03:46.880935] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.803 [2024-04-26 09:03:46.880945] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.803 [2024-04-26 09:03:46.883610] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.803 [2024-04-26 09:03:46.892396] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.803 [2024-04-26 09:03:46.893069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.893590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.893605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.803 [2024-04-26 09:03:46.893615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.803 [2024-04-26 09:03:46.893782] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.803 [2024-04-26 09:03:46.893947] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.803 [2024-04-26 09:03:46.893959] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.803 [2024-04-26 09:03:46.893968] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.803 [2024-04-26 09:03:46.896636] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
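Note: waitforlisten above blocks until the freshly started nvmf_tgt (pid 2232011) answers on /var/tmp/spdk.sock. A rough sketch of that pattern, assuming the usual rpc.py polling loop rather than the exact common.sh implementation:

  # Hypothetical poll: retry an RPC until the target's UNIX socket answers.
  for i in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done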
00:29:29.803 [2024-04-26 09:03:46.905321] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.803 [2024-04-26 09:03:46.905990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.906488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.906501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.803 [2024-04-26 09:03:46.906511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.803 [2024-04-26 09:03:46.906676] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.803 [2024-04-26 09:03:46.906841] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.803 [2024-04-26 09:03:46.906853] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.803 [2024-04-26 09:03:46.906862] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.803 [2024-04-26 09:03:46.909459] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.803 [2024-04-26 09:03:46.918184] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.803 [2024-04-26 09:03:46.918602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.919104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.919116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.803 [2024-04-26 09:03:46.919126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.803 [2024-04-26 09:03:46.919283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.803 [2024-04-26 09:03:46.919440] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.803 [2024-04-26 09:03:46.919459] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.803 [2024-04-26 09:03:46.919468] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.803 [2024-04-26 09:03:46.920000] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:29:29.803 [2024-04-26 09:03:46.920047] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.803 [2024-04-26 09:03:46.922066] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.803 [2024-04-26 09:03:46.931125] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.803 [2024-04-26 09:03:46.931814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.932285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.932297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.803 [2024-04-26 09:03:46.932307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.803 [2024-04-26 09:03:46.932468] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.803 [2024-04-26 09:03:46.932650] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.803 [2024-04-26 09:03:46.932662] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.803 [2024-04-26 09:03:46.932671] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.803 [2024-04-26 09:03:46.935257] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.803 [2024-04-26 09:03:46.944010] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.803 [2024-04-26 09:03:46.944717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.945224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.945236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.803 [2024-04-26 09:03:46.945246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.803 [2024-04-26 09:03:46.945402] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.803 [2024-04-26 09:03:46.945585] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.803 [2024-04-26 09:03:46.945597] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.803 [2024-04-26 09:03:46.945606] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.803 [2024-04-26 09:03:46.948269] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.803 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.803 [2024-04-26 09:03:46.956925] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.803 [2024-04-26 09:03:46.957364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.957859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.803 [2024-04-26 09:03:46.957873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.803 [2024-04-26 09:03:46.957882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.803 [2024-04-26 09:03:46.958048] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.803 [2024-04-26 09:03:46.958213] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.803 [2024-04-26 09:03:46.958224] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.803 [2024-04-26 09:03:46.958236] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.803 [2024-04-26 09:03:46.960878] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.803 [2024-04-26 09:03:46.969735] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.804 [2024-04-26 09:03:46.970160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.804 [2024-04-26 09:03:46.970588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.804 [2024-04-26 09:03:46.970602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.804 [2024-04-26 09:03:46.970612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.804 [2024-04-26 09:03:46.970778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.804 [2024-04-26 09:03:46.970944] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.804 [2024-04-26 09:03:46.970955] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.804 [2024-04-26 09:03:46.970964] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.804 [2024-04-26 09:03:46.973552] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
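Note: the EAL notice above ("No free 2048 kB hugepages reported on node 1") is informational in this run, but when DPDK initialization fails outright it is the first thing to check. A quick inspection of per-node hugepage counts (standard Linux sysfs paths, not part of the test scripts):

  # How many 2048 kB hugepages does each NUMA node have configured?
  grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  grep -i huge /proc/meminfo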
00:29:29.804 [2024-04-26 09:03:46.982548] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.804 [2024-04-26 09:03:46.983229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.804 [2024-04-26 09:03:46.983702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.804 [2024-04-26 09:03:46.983715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.804 [2024-04-26 09:03:46.983725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.804 [2024-04-26 09:03:46.983891] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.804 [2024-04-26 09:03:46.984057] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.804 [2024-04-26 09:03:46.984068] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.804 [2024-04-26 09:03:46.984077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.804 [2024-04-26 09:03:46.986668] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.804 [2024-04-26 09:03:46.995055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:29.804 [2024-04-26 09:03:46.995361] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.804 [2024-04-26 09:03:46.995965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.804 [2024-04-26 09:03:46.996465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.804 [2024-04-26 09:03:46.996479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.804 [2024-04-26 09:03:46.996490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.804 [2024-04-26 09:03:46.996659] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.804 [2024-04-26 09:03:46.996815] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.804 [2024-04-26 09:03:46.996827] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.804 [2024-04-26 09:03:46.996835] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.804 [2024-04-26 09:03:46.999440] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.804 [2024-04-26 09:03:47.008151] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.804 [2024-04-26 09:03:47.008842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.804 [2024-04-26 09:03:47.009336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.804 [2024-04-26 09:03:47.009349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.804 [2024-04-26 09:03:47.009359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.804 [2024-04-26 09:03:47.009527] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.804 [2024-04-26 09:03:47.009693] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.804 [2024-04-26 09:03:47.009705] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.804 [2024-04-26 09:03:47.009714] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.804 [2024-04-26 09:03:47.012298] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.804 [2024-04-26 09:03:47.020995] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.804 [2024-04-26 09:03:47.021681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.804 [2024-04-26 09:03:47.022175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.804 [2024-04-26 09:03:47.022189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.804 [2024-04-26 09:03:47.022199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.804 [2024-04-26 09:03:47.022364] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.804 [2024-04-26 09:03:47.022535] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.804 [2024-04-26 09:03:47.022548] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.804 [2024-04-26 09:03:47.022558] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.804 [2024-04-26 09:03:47.025139] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:29.804 [2024-04-26 09:03:47.033845] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.804 [2024-04-26 09:03:47.034560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.804 [2024-04-26 09:03:47.035061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.804 [2024-04-26 09:03:47.035075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:29.804 [2024-04-26 09:03:47.035085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:29.804 [2024-04-26 09:03:47.035245] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:29.804 [2024-04-26 09:03:47.035403] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:29.804 [2024-04-26 09:03:47.035415] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:29.804 [2024-04-26 09:03:47.035424] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:29.804 [2024-04-26 09:03:47.038028] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:29.804 [2024-04-26 09:03:47.046842] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:29.804 [2024-04-26 09:03:47.047547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-04-26 09:03:47.048029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-04-26 09:03:47.048043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.065 [2024-04-26 09:03:47.048053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.065 [2024-04-26 09:03:47.048222] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.065 [2024-04-26 09:03:47.048393] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.065 [2024-04-26 09:03:47.048404] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.065 [2024-04-26 09:03:47.048414] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.065 [2024-04-26 09:03:47.051054] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.065 [2024-04-26 09:03:47.059687] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.065 [2024-04-26 09:03:47.060376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-04-26 09:03:47.060821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.065 [2024-04-26 09:03:47.060835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.065 [2024-04-26 09:03:47.060846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.065 [2024-04-26 09:03:47.061010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.065 [2024-04-26 09:03:47.061176] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.065 [2024-04-26 09:03:47.061187] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.065 [2024-04-26 09:03:47.061196] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.065 [2024-04-26 09:03:47.063780] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.065 [2024-04-26 09:03:47.063873] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.065 [2024-04-26 09:03:47.063899] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.065 [2024-04-26 09:03:47.063909] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.065 [2024-04-26 09:03:47.063917] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.065 [2024-04-26 09:03:47.063924] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
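Note: the app_setup_trace notices above spell out how to capture the tracepoints enabled by -e 0xFFFF. Following them literally, a capture session would look like this (shm file name nvmf_trace.0 taken from the log):

  # Snapshot the running app's trace buffer, or keep the shm file for offline use.
  spdk_trace -s nvmf -i 0 > nvmf_trace.txt
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0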
00:29:30.065 [2024-04-26 09:03:47.063962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:30.065 [2024-04-26 09:03:47.064048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:29:30.065 [2024-04-26 09:03:47.064050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:30.065 [2024-04-26 09:03:47.072579] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.065 [2024-04-26 09:03:47.073301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.065 [2024-04-26 09:03:47.073822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.065 [2024-04-26 09:03:47.073838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.065 [2024-04-26 09:03:47.073849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.065 [2024-04-26 09:03:47.074028] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.065 [2024-04-26 09:03:47.074200] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.065 [2024-04-26 09:03:47.074213] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.065 [2024-04-26 09:03:47.074223] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.065 [2024-04-26 09:03:47.076881] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.065 [2024-04-26 09:03:47.085520] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.065 [2024-04-26 09:03:47.086237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.065 [2024-04-26 09:03:47.086587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.065 [2024-04-26 09:03:47.086604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.065 [2024-04-26 09:03:47.086615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.065 [2024-04-26 09:03:47.086791] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.065 [2024-04-26 09:03:47.086963] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.065 [2024-04-26 09:03:47.086975] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.065 [2024-04-26 09:03:47.086985] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.065 [2024-04-26 09:03:47.089647] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.065 [2024-04-26 09:03:47.098433] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.065 [2024-04-26 09:03:47.099148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.065 [2024-04-26 09:03:47.099668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.065 [2024-04-26 09:03:47.099682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.065 [2024-04-26 09:03:47.099694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.065 [2024-04-26 09:03:47.099866] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.065 [2024-04-26 09:03:47.100038] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.065 [2024-04-26 09:03:47.100048] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.065 [2024-04-26 09:03:47.100058] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.065 [2024-04-26 09:03:47.102715] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.065 [2024-04-26 09:03:47.111359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.065 [2024-04-26 09:03:47.112067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.065 [2024-04-26 09:03:47.112575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.065 [2024-04-26 09:03:47.112590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.065 [2024-04-26 09:03:47.112602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.065 [2024-04-26 09:03:47.112777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.065 [2024-04-26 09:03:47.112960] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.066 [2024-04-26 09:03:47.112972] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.066 [2024-04-26 09:03:47.112982] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.066 [2024-04-26 09:03:47.115836] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.066 [2024-04-26 09:03:47.124300] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.066 [2024-04-26 09:03:47.124940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.125440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.125458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.066 [2024-04-26 09:03:47.125471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.066 [2024-04-26 09:03:47.125644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.066 [2024-04-26 09:03:47.125816] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.066 [2024-04-26 09:03:47.125828] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.066 [2024-04-26 09:03:47.125839] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.066 [2024-04-26 09:03:47.128501] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.066 [2024-04-26 09:03:47.137273] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.066 [2024-04-26 09:03:47.137956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.138252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.138266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.066 [2024-04-26 09:03:47.138277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.066 [2024-04-26 09:03:47.138447] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.066 [2024-04-26 09:03:47.138624] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.066 [2024-04-26 09:03:47.138636] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.066 [2024-04-26 09:03:47.138645] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.066 [2024-04-26 09:03:47.141301] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.066 [2024-04-26 09:03:47.150236] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.066 [2024-04-26 09:03:47.150926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.151458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.151473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.066 [2024-04-26 09:03:47.151484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.066 [2024-04-26 09:03:47.151654] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.066 [2024-04-26 09:03:47.151824] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.066 [2024-04-26 09:03:47.151841] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.066 [2024-04-26 09:03:47.151851] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.066 [2024-04-26 09:03:47.154512] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.066 [2024-04-26 09:03:47.163132] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.066 [2024-04-26 09:03:47.163840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.164295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.164309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.066 [2024-04-26 09:03:47.164319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.066 [2024-04-26 09:03:47.164494] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.066 [2024-04-26 09:03:47.164664] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.066 [2024-04-26 09:03:47.164676] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.066 [2024-04-26 09:03:47.164685] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.066 [2024-04-26 09:03:47.167339] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.066 [2024-04-26 09:03:47.176123] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.066 [2024-04-26 09:03:47.176838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.177350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.177363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.066 [2024-04-26 09:03:47.177373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.066 [2024-04-26 09:03:47.177562] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.066 [2024-04-26 09:03:47.177738] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.066 [2024-04-26 09:03:47.177750] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.066 [2024-04-26 09:03:47.177759] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.066 [2024-04-26 09:03:47.180417] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.066 [2024-04-26 09:03:47.189039] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.066 [2024-04-26 09:03:47.189716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.190122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.190136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.066 [2024-04-26 09:03:47.190146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.066 [2024-04-26 09:03:47.190317] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.066 [2024-04-26 09:03:47.190492] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.066 [2024-04-26 09:03:47.190505] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.066 [2024-04-26 09:03:47.190517] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.066 [2024-04-26 09:03:47.193175] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.066 [2024-04-26 09:03:47.201959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.066 [2024-04-26 09:03:47.202631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.203078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.203092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.066 [2024-04-26 09:03:47.203102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.066 [2024-04-26 09:03:47.203272] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.066 [2024-04-26 09:03:47.203442] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.066 [2024-04-26 09:03:47.203459] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.066 [2024-04-26 09:03:47.203469] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.066 [2024-04-26 09:03:47.206128] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.066 [2024-04-26 09:03:47.214920] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.066 [2024-04-26 09:03:47.215536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.216033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.066 [2024-04-26 09:03:47.216047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.066 [2024-04-26 09:03:47.216057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.066 [2024-04-26 09:03:47.216227] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.067 [2024-04-26 09:03:47.216397] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.067 [2024-04-26 09:03:47.216409] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.067 [2024-04-26 09:03:47.216418] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.067 [2024-04-26 09:03:47.219076] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.067 [2024-04-26 09:03:47.227853] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.067 [2024-04-26 09:03:47.228542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.067 [2024-04-26 09:03:47.228913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.067 [2024-04-26 09:03:47.228927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.067 [2024-04-26 09:03:47.228937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.067 [2024-04-26 09:03:47.229107] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.067 [2024-04-26 09:03:47.229278] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.067 [2024-04-26 09:03:47.229290] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.067 [2024-04-26 09:03:47.229299] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.067 [2024-04-26 09:03:47.231965] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.067 [2024-04-26 09:03:47.240749] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.067 [2024-04-26 09:03:47.241437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.067 [2024-04-26 09:03:47.241910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.067 [2024-04-26 09:03:47.241924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.067 [2024-04-26 09:03:47.241934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.067 [2024-04-26 09:03:47.242103] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.067 [2024-04-26 09:03:47.242274] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.067 [2024-04-26 09:03:47.242285] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.067 [2024-04-26 09:03:47.242294] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.067 [2024-04-26 09:03:47.244953] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.067 [2024-04-26 09:03:47.253731] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.067 [2024-04-26 09:03:47.254423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.067 [2024-04-26 09:03:47.254875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.067 [2024-04-26 09:03:47.254889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.067 [2024-04-26 09:03:47.254899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.067 [2024-04-26 09:03:47.255068] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.067 [2024-04-26 09:03:47.255238] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.067 [2024-04-26 09:03:47.255249] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.067 [2024-04-26 09:03:47.255258] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.067 [2024-04-26 09:03:47.257916] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.067 [2024-04-26 09:03:47.266696] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.067 [2024-04-26 09:03:47.267123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.067 [2024-04-26 09:03:47.267495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.067 [2024-04-26 09:03:47.267509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.067 [2024-04-26 09:03:47.267519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.067 [2024-04-26 09:03:47.267688] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.067 [2024-04-26 09:03:47.267857] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.067 [2024-04-26 09:03:47.267869] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.067 [2024-04-26 09:03:47.267878] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.067 [2024-04-26 09:03:47.270532] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.067 [2024-04-26 09:03:47.279626] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.067 [2024-04-26 09:03:47.280316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.067 [2024-04-26 09:03:47.280811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.067 [2024-04-26 09:03:47.280826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.067 [2024-04-26 09:03:47.280836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.067 [2024-04-26 09:03:47.281005] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.067 [2024-04-26 09:03:47.281175] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.067 [2024-04-26 09:03:47.281187] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.067 [2024-04-26 09:03:47.281196] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.067 [2024-04-26 09:03:47.283856] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.067 [2024-04-26 09:03:47.292501] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.067 [2024-04-26 09:03:47.293195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.067 [2024-04-26 09:03:47.293621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.067 [2024-04-26 09:03:47.293634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.067 [2024-04-26 09:03:47.293645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.067 [2024-04-26 09:03:47.293814] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.067 [2024-04-26 09:03:47.293984] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.067 [2024-04-26 09:03:47.293995] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.067 [2024-04-26 09:03:47.294004] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.067 [2024-04-26 09:03:47.296665] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.067 [2024-04-26 09:03:47.305440] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.067 [2024-04-26 09:03:47.306138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.067 [2024-04-26 09:03:47.306611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.067 [2024-04-26 09:03:47.306625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.067 [2024-04-26 09:03:47.306635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.067 [2024-04-26 09:03:47.306804] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.068 [2024-04-26 09:03:47.306974] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.068 [2024-04-26 09:03:47.306985] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.068 [2024-04-26 09:03:47.306994] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.068 [2024-04-26 09:03:47.309651] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.328 [2024-04-26 09:03:47.318437] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.328 [2024-04-26 09:03:47.319142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.319561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.319574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.328 [2024-04-26 09:03:47.319584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.328 [2024-04-26 09:03:47.319754] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.328 [2024-04-26 09:03:47.319924] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.328 [2024-04-26 09:03:47.319935] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.328 [2024-04-26 09:03:47.319944] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.328 [2024-04-26 09:03:47.322597] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.328 [2024-04-26 09:03:47.331367] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.328 [2024-04-26 09:03:47.332057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.332491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.332505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.328 [2024-04-26 09:03:47.332515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.328 [2024-04-26 09:03:47.332685] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.328 [2024-04-26 09:03:47.332855] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.328 [2024-04-26 09:03:47.332866] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.328 [2024-04-26 09:03:47.332875] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.328 [2024-04-26 09:03:47.335537] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.328 [2024-04-26 09:03:47.344314] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.328 [2024-04-26 09:03:47.345008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.345420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.345434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.328 [2024-04-26 09:03:47.345444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.328 [2024-04-26 09:03:47.345618] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.328 [2024-04-26 09:03:47.345788] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.328 [2024-04-26 09:03:47.345799] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.328 [2024-04-26 09:03:47.345809] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.328 [2024-04-26 09:03:47.348467] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.328 [2024-04-26 09:03:47.357244] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.328 [2024-04-26 09:03:47.357916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.358350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.358364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.328 [2024-04-26 09:03:47.358373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.328 [2024-04-26 09:03:47.358547] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.328 [2024-04-26 09:03:47.358718] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.328 [2024-04-26 09:03:47.358729] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.328 [2024-04-26 09:03:47.358738] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.328 [2024-04-26 09:03:47.361391] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.328 [2024-04-26 09:03:47.370156] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.328 [2024-04-26 09:03:47.370850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.371300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.371313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.328 [2024-04-26 09:03:47.371323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.328 [2024-04-26 09:03:47.371496] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.328 [2024-04-26 09:03:47.371667] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.328 [2024-04-26 09:03:47.371678] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.328 [2024-04-26 09:03:47.371687] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.328 [2024-04-26 09:03:47.374337] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.328 [2024-04-26 09:03:47.383131] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.328 [2024-04-26 09:03:47.383828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.384250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.384264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.328 [2024-04-26 09:03:47.384274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.328 [2024-04-26 09:03:47.384445] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.328 [2024-04-26 09:03:47.384620] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.328 [2024-04-26 09:03:47.384631] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.328 [2024-04-26 09:03:47.384640] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.328 [2024-04-26 09:03:47.387298] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.328 [2024-04-26 09:03:47.396088] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.328 [2024-04-26 09:03:47.396697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.397120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.397133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.328 [2024-04-26 09:03:47.397146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.328 [2024-04-26 09:03:47.397316] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.328 [2024-04-26 09:03:47.397490] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.328 [2024-04-26 09:03:47.397503] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.328 [2024-04-26 09:03:47.397513] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.328 [2024-04-26 09:03:47.400168] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.328 [2024-04-26 09:03:47.408997] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.328 [2024-04-26 09:03:47.409670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.410061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.328 [2024-04-26 09:03:47.410075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.328 [2024-04-26 09:03:47.410084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.328 [2024-04-26 09:03:47.410255] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.328 [2024-04-26 09:03:47.410425] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.328 [2024-04-26 09:03:47.410436] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.328 [2024-04-26 09:03:47.410445] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.328 [2024-04-26 09:03:47.413102] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.329 [2024-04-26 09:03:47.421901] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.329 [2024-04-26 09:03:47.422329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.329 [2024-04-26 09:03:47.422845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.329 [2024-04-26 09:03:47.422859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.329 [2024-04-26 09:03:47.422869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.329 [2024-04-26 09:03:47.423038] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.329 [2024-04-26 09:03:47.423208] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.329 [2024-04-26 09:03:47.423220] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.329 [2024-04-26 09:03:47.423229] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.329 [2024-04-26 09:03:47.425891] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.329 [2024-04-26 09:03:47.434833] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.329 [2024-04-26 09:03:47.435408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.329 [2024-04-26 09:03:47.435835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.329 [2024-04-26 09:03:47.435849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.329 [2024-04-26 09:03:47.435859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.329 [2024-04-26 09:03:47.436033] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.329 [2024-04-26 09:03:47.436204] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.329 [2024-04-26 09:03:47.436215] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.329 [2024-04-26 09:03:47.436225] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.329 [2024-04-26 09:03:47.438888] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.329 [2024-04-26 09:03:47.447801] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.329 [2024-04-26 09:03:47.448492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.329 [2024-04-26 09:03:47.448990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.329 [2024-04-26 09:03:47.449004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.329 [2024-04-26 09:03:47.449014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.329 [2024-04-26 09:03:47.449183] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.329 [2024-04-26 09:03:47.449352] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.329 [2024-04-26 09:03:47.449364] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.329 [2024-04-26 09:03:47.449373] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.329 [2024-04-26 09:03:47.452034] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.329 [2024-04-26 09:03:47.460660] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.329 [2024-04-26 09:03:47.461353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.329 [2024-04-26 09:03:47.461850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.329 [2024-04-26 09:03:47.461864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.329 [2024-04-26 09:03:47.461873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.329 [2024-04-26 09:03:47.462043] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.329 [2024-04-26 09:03:47.462213] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.329 [2024-04-26 09:03:47.462225] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.329 [2024-04-26 09:03:47.462234] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.329 [2024-04-26 09:03:47.464894] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.329 [2024-04-26 09:03:47.473508] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.329 [2024-04-26 09:03:47.474134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.329 [2024-04-26 09:03:47.474563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.329 [2024-04-26 09:03:47.474577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.329 [2024-04-26 09:03:47.474587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.329 [2024-04-26 09:03:47.474757] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.329 [2024-04-26 09:03:47.474931] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.329 [2024-04-26 09:03:47.474943] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.329 [2024-04-26 09:03:47.474952] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.329 [2024-04-26 09:03:47.477613] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.329 [2024-04-26 09:03:47.486384] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.329 [2024-04-26 09:03:47.487077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.329 [2024-04-26 09:03:47.487503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.329 [2024-04-26 09:03:47.487517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.329 [2024-04-26 09:03:47.487527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.329 [2024-04-26 09:03:47.487697] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.329 [2024-04-26 09:03:47.487868] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.329 [2024-04-26 09:03:47.487880] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.329 [2024-04-26 09:03:47.487891] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.329 [2024-04-26 09:03:47.490551] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.329 [2024-04-26 09:03:47.499333] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.329 [2024-04-26 09:03:47.500030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.329 [2024-04-26 09:03:47.500476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.329 [2024-04-26 09:03:47.500489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.329 [2024-04-26 09:03:47.500500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.329 [2024-04-26 09:03:47.500669] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.329 [2024-04-26 09:03:47.500839] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.329 [2024-04-26 09:03:47.500851] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.329 [2024-04-26 09:03:47.500860] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.329 [2024-04-26 09:03:47.503519] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.330 [2024-04-26 09:03:47.512297] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.330 [2024-04-26 09:03:47.512988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.330 [2024-04-26 09:03:47.513482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.330 [2024-04-26 09:03:47.513497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.330 [2024-04-26 09:03:47.513508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.330 [2024-04-26 09:03:47.513679] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.330 [2024-04-26 09:03:47.513850] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.330 [2024-04-26 09:03:47.513864] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.330 [2024-04-26 09:03:47.513873] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.330 [2024-04-26 09:03:47.516530] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.330 [2024-04-26 09:03:47.525297] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.330 [2024-04-26 09:03:47.525744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.330 [2024-04-26 09:03:47.526247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.330 [2024-04-26 09:03:47.526261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.330 [2024-04-26 09:03:47.526271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.330 [2024-04-26 09:03:47.526441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.330 [2024-04-26 09:03:47.526616] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.330 [2024-04-26 09:03:47.526628] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.330 [2024-04-26 09:03:47.526637] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.330 [2024-04-26 09:03:47.529295] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.330 [2024-04-26 09:03:47.538225] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.330 [2024-04-26 09:03:47.538909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.330 [2024-04-26 09:03:47.539282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.330 [2024-04-26 09:03:47.539296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.330 [2024-04-26 09:03:47.539306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.330 [2024-04-26 09:03:47.539480] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.330 [2024-04-26 09:03:47.539651] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.330 [2024-04-26 09:03:47.539663] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.330 [2024-04-26 09:03:47.539672] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.330 [2024-04-26 09:03:47.542325] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.330 [2024-04-26 09:03:47.551112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.330 [2024-04-26 09:03:47.551787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.330 [2024-04-26 09:03:47.552283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.330 [2024-04-26 09:03:47.552296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.330 [2024-04-26 09:03:47.552307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.330 [2024-04-26 09:03:47.552481] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.330 [2024-04-26 09:03:47.552652] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.330 [2024-04-26 09:03:47.552663] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.330 [2024-04-26 09:03:47.552675] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.330 [2024-04-26 09:03:47.555324] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.330 [2024-04-26 09:03:47.564100] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.330 [2024-04-26 09:03:47.564801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.330 [2024-04-26 09:03:47.565295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.330 [2024-04-26 09:03:47.565308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.330 [2024-04-26 09:03:47.565318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.330 [2024-04-26 09:03:47.565494] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.330 [2024-04-26 09:03:47.565664] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.330 [2024-04-26 09:03:47.565676] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.330 [2024-04-26 09:03:47.565686] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.330 [2024-04-26 09:03:47.568338] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.591 [2024-04-26 09:03:47.576968] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:30.591 [2024-04-26 09:03:47.577666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.591 [2024-04-26 09:03:47.578172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.591 [2024-04-26 09:03:47.578186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420
00:29:30.591 [2024-04-26 09:03:47.578195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set
00:29:30.591 [2024-04-26 09:03:47.578365] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor
00:29:30.591 [2024-04-26 09:03:47.578540] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:30.591 [2024-04-26 09:03:47.578551] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:30.591 [2024-04-26 09:03:47.578560] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:30.591 [2024-04-26 09:03:47.581219] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.591 [2024-04-26 09:03:47.589838] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.591 [2024-04-26 09:03:47.590533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.591010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.591023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.591 [2024-04-26 09:03:47.591033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.591 [2024-04-26 09:03:47.591202] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.591 [2024-04-26 09:03:47.591371] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.591 [2024-04-26 09:03:47.591383] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.591 [2024-04-26 09:03:47.591392] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.591 [2024-04-26 09:03:47.594051] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.591 [2024-04-26 09:03:47.602700] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.591 [2024-04-26 09:03:47.603376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.603791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.603805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.591 [2024-04-26 09:03:47.603815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.591 [2024-04-26 09:03:47.603985] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.591 [2024-04-26 09:03:47.604155] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.591 [2024-04-26 09:03:47.604167] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.591 [2024-04-26 09:03:47.604176] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.591 [2024-04-26 09:03:47.606837] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.591 [2024-04-26 09:03:47.615634] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.591 [2024-04-26 09:03:47.616289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.616796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.616812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.591 [2024-04-26 09:03:47.616822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.591 [2024-04-26 09:03:47.616994] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.591 [2024-04-26 09:03:47.617164] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.591 [2024-04-26 09:03:47.617176] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.591 [2024-04-26 09:03:47.617185] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.591 [2024-04-26 09:03:47.619850] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.591 [2024-04-26 09:03:47.628646] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.591 [2024-04-26 09:03:47.629321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.629846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.629861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.591 [2024-04-26 09:03:47.629871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.591 [2024-04-26 09:03:47.630037] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.591 [2024-04-26 09:03:47.630202] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.591 [2024-04-26 09:03:47.630214] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.591 [2024-04-26 09:03:47.630224] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.591 [2024-04-26 09:03:47.632894] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.591 [2024-04-26 09:03:47.641533] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.591 [2024-04-26 09:03:47.642208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.642634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.642649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.591 [2024-04-26 09:03:47.642659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.591 [2024-04-26 09:03:47.642829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.591 [2024-04-26 09:03:47.643000] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.591 [2024-04-26 09:03:47.643011] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.591 [2024-04-26 09:03:47.643021] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.591 [2024-04-26 09:03:47.645681] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.591 [2024-04-26 09:03:47.654462] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.591 [2024-04-26 09:03:47.655160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.655637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.655651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.591 [2024-04-26 09:03:47.655662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.591 [2024-04-26 09:03:47.655832] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.591 [2024-04-26 09:03:47.656004] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.591 [2024-04-26 09:03:47.656016] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.591 [2024-04-26 09:03:47.656026] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.591 [2024-04-26 09:03:47.658687] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.591 [2024-04-26 09:03:47.667316] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.591 [2024-04-26 09:03:47.667996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.668422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.668435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.591 [2024-04-26 09:03:47.668445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.591 [2024-04-26 09:03:47.668619] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.591 [2024-04-26 09:03:47.668788] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.591 [2024-04-26 09:03:47.668800] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.591 [2024-04-26 09:03:47.668809] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.591 [2024-04-26 09:03:47.671478] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.591 [2024-04-26 09:03:47.680258] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.591 [2024-04-26 09:03:47.680889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.681272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.591 [2024-04-26 09:03:47.681286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.591 [2024-04-26 09:03:47.681296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.591 [2024-04-26 09:03:47.681472] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.591 [2024-04-26 09:03:47.681642] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.591 [2024-04-26 09:03:47.681654] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.591 [2024-04-26 09:03:47.681664] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.591 [2024-04-26 09:03:47.684325] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.592 [2024-04-26 09:03:47.693136] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.592 [2024-04-26 09:03:47.693745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.694163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.694176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.592 [2024-04-26 09:03:47.694187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.592 [2024-04-26 09:03:47.694356] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.592 [2024-04-26 09:03:47.694531] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.592 [2024-04-26 09:03:47.694543] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.592 [2024-04-26 09:03:47.694553] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.592 [2024-04-26 09:03:47.697206] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.592 [2024-04-26 09:03:47.706148] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.592 [2024-04-26 09:03:47.706793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.707221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.707235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.592 [2024-04-26 09:03:47.707245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.592 [2024-04-26 09:03:47.707416] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.592 [2024-04-26 09:03:47.707591] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.592 [2024-04-26 09:03:47.707603] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.592 [2024-04-26 09:03:47.707612] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.592 [2024-04-26 09:03:47.710267] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.592 [2024-04-26 09:03:47.719071] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.592 [2024-04-26 09:03:47.719676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.720083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.720096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.592 [2024-04-26 09:03:47.720106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.592 [2024-04-26 09:03:47.720276] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.592 [2024-04-26 09:03:47.720446] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.592 [2024-04-26 09:03:47.720463] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.592 [2024-04-26 09:03:47.720473] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.592 [2024-04-26 09:03:47.723122] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.592 09:03:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:30.592 09:03:47 -- common/autotest_common.sh@850 -- # return 0 00:29:30.592 09:03:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:30.592 09:03:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:30.592 09:03:47 -- common/autotest_common.sh@10 -- # set +x 00:29:30.592 [2024-04-26 09:03:47.732085] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.592 [2024-04-26 09:03:47.732428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.732816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.732830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.592 [2024-04-26 09:03:47.732840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.592 [2024-04-26 09:03:47.733010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.592 [2024-04-26 09:03:47.733181] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.592 [2024-04-26 09:03:47.733193] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.592 [2024-04-26 09:03:47.733202] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.592 [2024-04-26 09:03:47.735864] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.592 [2024-04-26 09:03:47.744955] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.592 [2024-04-26 09:03:47.745309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.745697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.745711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.592 [2024-04-26 09:03:47.745721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.592 [2024-04-26 09:03:47.745891] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.592 [2024-04-26 09:03:47.746061] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.592 [2024-04-26 09:03:47.746073] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.592 [2024-04-26 09:03:47.746083] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.592 [2024-04-26 09:03:47.748747] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.592 [2024-04-26 09:03:47.757847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.592 [2024-04-26 09:03:47.758504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.758941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.758954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.592 [2024-04-26 09:03:47.758964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.592 [2024-04-26 09:03:47.759135] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.592 [2024-04-26 09:03:47.759306] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.592 [2024-04-26 09:03:47.759317] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.592 [2024-04-26 09:03:47.759327] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.592 [2024-04-26 09:03:47.761986] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.592 [2024-04-26 09:03:47.770779] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.592 [2024-04-26 09:03:47.771380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.771772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.771786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.592 [2024-04-26 09:03:47.771796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.592 [2024-04-26 09:03:47.771967] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.592 09:03:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.592 [2024-04-26 09:03:47.772138] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.592 [2024-04-26 09:03:47.772151] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.592 [2024-04-26 09:03:47.772160] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.592 09:03:47 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:30.592 09:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.592 09:03:47 -- common/autotest_common.sh@10 -- # set +x 00:29:30.592 [2024-04-26 09:03:47.774820] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.592 [2024-04-26 09:03:47.776414] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.592 09:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.592 09:03:47 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:30.592 09:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.592 09:03:47 -- common/autotest_common.sh@10 -- # set +x 00:29:30.592 [2024-04-26 09:03:47.783764] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.592 [2024-04-26 09:03:47.784461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.784891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.784905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.592 [2024-04-26 09:03:47.784915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.592 [2024-04-26 09:03:47.785086] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.592 [2024-04-26 09:03:47.785257] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.592 [2024-04-26 09:03:47.785272] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.592 [2024-04-26 09:03:47.785281] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
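The rpc_cmd calls traced here (bdevperf.sh@17 and @18) are thin wrappers around SPDK's scripts/rpc.py, talking to the freshly restarted nvmf_tgt over its RPC socket. A standalone sketch of the transport-creation step, assuming the in-tree rpc.py path and the default /var/tmp/spdk.sock socket (assumptions, not confirmed by the log):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # -t tcp selects the TCP transport and -u 8192 sets the I/O unit size in
    # bytes; -o is the TCP-specific C2H success toggle in rpc.py of this
    # vintage (passing it disables the optimization).
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock \
        nvmf_create_transport -t tcp -o -u 8192

The `*** TCP Transport Init ***` notice that follows in the trace is the target acknowledging this call.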
00:29:30.592 [2024-04-26 09:03:47.787939] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.592 [2024-04-26 09:03:47.796728] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.592 [2024-04-26 09:03:47.797394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.592 [2024-04-26 09:03:47.797759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.593 [2024-04-26 09:03:47.797773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.593 [2024-04-26 09:03:47.797783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.593 [2024-04-26 09:03:47.797954] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.593 [2024-04-26 09:03:47.798125] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.593 [2024-04-26 09:03:47.798137] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.593 [2024-04-26 09:03:47.798145] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.593 [2024-04-26 09:03:47.800814] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.593 [2024-04-26 09:03:47.809593] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.593 [2024-04-26 09:03:47.810263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.593 [2024-04-26 09:03:47.810482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.593 [2024-04-26 09:03:47.810496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.593 [2024-04-26 09:03:47.810507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.593 [2024-04-26 09:03:47.810678] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.593 [2024-04-26 09:03:47.810849] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.593 [2024-04-26 09:03:47.810861] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.593 [2024-04-26 09:03:47.810872] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.593 [2024-04-26 09:03:47.813537] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:30.593 Malloc0 00:29:30.593 09:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.593 09:03:47 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:30.593 09:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.593 09:03:47 -- common/autotest_common.sh@10 -- # set +x 00:29:30.593 [2024-04-26 09:03:47.822492] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.593 [2024-04-26 09:03:47.823052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.593 [2024-04-26 09:03:47.823198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.593 [2024-04-26 09:03:47.823211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.593 [2024-04-26 09:03:47.823222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.593 [2024-04-26 09:03:47.823392] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.593 [2024-04-26 09:03:47.823573] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.593 [2024-04-26 09:03:47.823585] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.593 [2024-04-26 09:03:47.823594] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:30.593 [2024-04-26 09:03:47.826255] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:30.593 09:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.593 09:03:47 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:30.593 09:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.593 09:03:47 -- common/autotest_common.sh@10 -- # set +x 00:29:30.593 [2024-04-26 09:03:47.835346] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.593 [2024-04-26 09:03:47.835902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.593 [2024-04-26 09:03:47.836333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.593 [2024-04-26 09:03:47.836347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e32b80 with addr=10.0.0.2, port=4420 00:29:30.593 [2024-04-26 09:03:47.836357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32b80 is same with the state(5) to be set 00:29:30.852 [2024-04-26 09:03:47.836532] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e32b80 (9): Bad file descriptor 00:29:30.852 [2024-04-26 09:03:47.836703] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.852 [2024-04-26 09:03:47.836714] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.852 [2024-04-26 09:03:47.836723] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
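Interleaved with the reconnect noise, the harness is building up the target: the Malloc0 bdev created a moment ago is wrapped in subsystem nqn.2016-06.io.spdk:cnode1 and exported as a namespace, and the TCP listener is added just below. The same bring-up as direct rpc.py calls — a sketch with the repository path and RPC socket assumed as before; the NQN, serial, bdev name, address, and port are the ones in the log:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                          # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420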
00:29:30.852 [2024-04-26 09:03:47.839384] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.852 09:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
09:03:47 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
09:03:47 -- common/autotest_common.sh@549 -- # xtrace_disable
09:03:47 -- common/autotest_common.sh@10 -- # set +x
00:29:30.852 [2024-04-26 09:03:47.844344] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
[2024-04-26 09:03:47.848329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
09:03:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
09:03:47 -- host/bdevperf.sh@38 -- # wait 2230946
[2024-04-26 09:03:47.875113] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:40.828
00:29:40.828                                                             Latency(us)
00:29:40.828 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:40.828 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:40.828 Verification LBA range: start 0x0 length 0x4000
00:29:40.828 Nvme1n1             :      15.01    8383.71      32.75   12456.88       0.00    6122.69    1238.63   29779.56
00:29:40.828 ===================================================================================================================
00:29:40.828 Total               :            8383.71      32.75   12456.88       0.00    6122.69    1238.63   29779.56
00:29:40.828 09:03:56 -- host/bdevperf.sh@39 -- # sync
09:03:56 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
09:03:56 -- common/autotest_common.sh@549 -- # xtrace_disable
09:03:56 -- common/autotest_common.sh@10 -- # set +x
09:03:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
09:03:56 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
09:03:56 -- host/bdevperf.sh@44 -- # nvmftestfini
09:03:56 -- nvmf/common.sh@477 -- # nvmfcleanup
09:03:56 -- nvmf/common.sh@117 -- # sync
09:03:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
09:03:56 -- nvmf/common.sh@120 -- # set +e
09:03:56 -- nvmf/common.sh@121 -- # for i in {1..20}
09:03:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
09:03:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
09:03:56 -- nvmf/common.sh@124 -- # set -e
09:03:56 -- nvmf/common.sh@125 -- # return 0
09:03:56 -- nvmf/common.sh@478 -- # '[' -n 2232011 ']'
09:03:56 -- nvmf/common.sh@479 -- # killprocess 2232011
09:03:56 -- common/autotest_common.sh@936 -- # '[' -z 2232011 ']'
09:03:56 -- common/autotest_common.sh@940 -- # kill -0 2232011
09:03:56 -- common/autotest_common.sh@941 -- # uname
09:03:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
09:03:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2232011
09:03:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1
09:03:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
09:03:56 -- common/autotest_common.sh@954 -- # echo 'killing process with
pid 2232011' 00:29:40.828 killing process with pid 2232011 00:29:40.828 09:03:56 -- common/autotest_common.sh@955 -- # kill 2232011 00:29:40.828 09:03:56 -- common/autotest_common.sh@960 -- # wait 2232011 00:29:40.828 09:03:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:40.828 09:03:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:40.828 09:03:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:40.828 09:03:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:40.828 09:03:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:40.828 09:03:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.828 09:03:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:40.828 09:03:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.764 09:03:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:41.764 00:29:41.764 real 0m27.464s 00:29:41.764 user 1m2.940s 00:29:41.764 sys 0m7.887s 00:29:41.764 09:03:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:41.764 09:03:58 -- common/autotest_common.sh@10 -- # set +x 00:29:41.764 ************************************ 00:29:41.764 END TEST nvmf_bdevperf 00:29:41.764 ************************************ 00:29:42.022 09:03:59 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:42.022 09:03:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:42.022 09:03:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:42.022 09:03:59 -- common/autotest_common.sh@10 -- # set +x 00:29:42.022 ************************************ 00:29:42.022 START TEST nvmf_target_disconnect 00:29:42.022 ************************************ 00:29:42.022 09:03:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:42.281 * Looking for test storage... 
00:29:42.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:42.281 09:03:59 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.281 09:03:59 -- nvmf/common.sh@7 -- # uname -s 00:29:42.281 09:03:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.281 09:03:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.281 09:03:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.281 09:03:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.281 09:03:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.281 09:03:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.281 09:03:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.281 09:03:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.281 09:03:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.281 09:03:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.281 09:03:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:42.281 09:03:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:42.281 09:03:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.281 09:03:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.281 09:03:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.281 09:03:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.281 09:03:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.281 09:03:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.281 09:03:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.281 09:03:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.281 09:03:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.281 09:03:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.281 09:03:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.281 09:03:59 -- paths/export.sh@5 -- # export PATH 00:29:42.281 09:03:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.281 09:03:59 -- nvmf/common.sh@47 -- # : 0 00:29:42.281 09:03:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:42.281 09:03:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:42.281 09:03:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.281 09:03:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.281 09:03:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.281 09:03:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:42.281 09:03:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:42.281 09:03:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:42.281 09:03:59 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:42.281 09:03:59 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:42.281 09:03:59 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:42.281 09:03:59 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:29:42.281 09:03:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:42.281 09:03:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.281 09:03:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:42.281 09:03:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:42.281 09:03:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:42.281 09:03:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.281 09:03:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:42.281 09:03:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.281 09:03:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:29:42.281 09:03:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:29:42.281 09:03:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:42.281 09:03:59 -- common/autotest_common.sh@10 -- # set +x 00:29:48.867 09:04:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:48.867 09:04:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:48.867 09:04:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:48.867 09:04:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:48.867 09:04:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:48.867 09:04:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:48.867 09:04:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:48.867 
09:04:05 -- nvmf/common.sh@295 -- # net_devs=() 00:29:48.867 09:04:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:48.867 09:04:05 -- nvmf/common.sh@296 -- # e810=() 00:29:48.867 09:04:05 -- nvmf/common.sh@296 -- # local -ga e810 00:29:48.867 09:04:05 -- nvmf/common.sh@297 -- # x722=() 00:29:48.867 09:04:05 -- nvmf/common.sh@297 -- # local -ga x722 00:29:48.867 09:04:05 -- nvmf/common.sh@298 -- # mlx=() 00:29:48.867 09:04:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:48.867 09:04:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.867 09:04:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.867 09:04:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.867 09:04:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.867 09:04:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.867 09:04:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.867 09:04:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.867 09:04:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.867 09:04:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.867 09:04:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.867 09:04:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.867 09:04:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:48.867 09:04:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:48.867 09:04:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:48.867 09:04:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.867 09:04:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:48.867 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:48.867 09:04:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.867 09:04:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:48.867 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:48.867 09:04:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:48.867 09:04:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.867 09:04:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.867 09:04:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:48.867 09:04:05 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.867 09:04:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:48.867 Found net devices under 0000:af:00.0: cvl_0_0 00:29:48.867 09:04:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.867 09:04:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.867 09:04:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.867 09:04:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:29:48.867 09:04:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.867 09:04:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:48.867 Found net devices under 0000:af:00.1: cvl_0_1 00:29:48.867 09:04:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.867 09:04:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:29:48.867 09:04:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:29:48.867 09:04:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:29:48.867 09:04:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:29:48.867 09:04:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.867 09:04:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.867 09:04:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.867 09:04:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:48.867 09:04:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.867 09:04:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.867 09:04:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:48.867 09:04:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.867 09:04:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.867 09:04:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:48.867 09:04:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:48.867 09:04:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.867 09:04:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:49.126 09:04:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:49.126 09:04:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:49.126 09:04:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:49.126 09:04:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:49.126 09:04:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:49.126 09:04:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:49.126 09:04:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:49.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:49.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:29:49.126 00:29:49.126 --- 10.0.0.2 ping statistics --- 00:29:49.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.126 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:29:49.126 09:04:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:49.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:49.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:29:49.126 00:29:49.126 --- 10.0.0.1 ping statistics --- 00:29:49.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.126 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:29:49.126 09:04:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:49.126 09:04:06 -- nvmf/common.sh@411 -- # return 0 00:29:49.126 09:04:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:49.126 09:04:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:49.126 09:04:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:49.126 09:04:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:49.126 09:04:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:49.126 09:04:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:49.126 09:04:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:49.126 09:04:06 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:49.126 09:04:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:49.126 09:04:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:49.126 09:04:06 -- common/autotest_common.sh@10 -- # set +x 00:29:49.385 ************************************ 00:29:49.385 START TEST nvmf_target_disconnect_tc1 00:29:49.385 ************************************ 00:29:49.385 09:04:06 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:29:49.385 09:04:06 -- host/target_disconnect.sh@32 -- # set +e 00:29:49.385 09:04:06 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:49.385 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.385 [2024-04-26 09:04:06.600922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.385 [2024-04-26 09:04:06.601583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.385 [2024-04-26 09:04:06.601642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4d6c0 with addr=10.0.0.2, port=4420 00:29:49.385 [2024-04-26 09:04:06.601712] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:49.385 [2024-04-26 09:04:06.601763] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:49.385 [2024-04-26 09:04:06.601791] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:49.385 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:49.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:49.385 Initializing NVMe Controllers 00:29:49.385 09:04:06 -- host/target_disconnect.sh@33 -- # trap - ERR 00:29:49.385 09:04:06 -- host/target_disconnect.sh@33 -- # print_backtrace 00:29:49.385 09:04:06 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:29:49.385 09:04:06 -- common/autotest_common.sh@1139 -- # return 0 00:29:49.385 09:04:06 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:29:49.385 09:04:06 -- host/target_disconnect.sh@41 -- # set -e 00:29:49.385 00:29:49.385 real 0m0.112s 00:29:49.385 user 0m0.044s 00:29:49.385 sys 0m0.067s 00:29:49.385 09:04:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:49.385 09:04:06 -- common/autotest_common.sh@10 -- # set +x 00:29:49.385 ************************************ 00:29:49.385 
END TEST nvmf_target_disconnect_tc1 00:29:49.385 ************************************ 00:29:49.643 09:04:06 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:49.643 09:04:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:49.643 09:04:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:49.643 09:04:06 -- common/autotest_common.sh@10 -- # set +x 00:29:49.643 ************************************ 00:29:49.643 START TEST nvmf_target_disconnect_tc2 00:29:49.643 ************************************ 00:29:49.643 09:04:06 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:29:49.643 09:04:06 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:29:49.643 09:04:06 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:49.643 09:04:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:49.643 09:04:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:49.644 09:04:06 -- common/autotest_common.sh@10 -- # set +x 00:29:49.644 09:04:06 -- nvmf/common.sh@470 -- # nvmfpid=2237380 00:29:49.644 09:04:06 -- nvmf/common.sh@471 -- # waitforlisten 2237380 00:29:49.644 09:04:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:49.644 09:04:06 -- common/autotest_common.sh@817 -- # '[' -z 2237380 ']' 00:29:49.644 09:04:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.644 09:04:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:49.644 09:04:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.644 09:04:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:49.644 09:04:06 -- common/autotest_common.sh@10 -- # set +x 00:29:49.902 [2024-04-26 09:04:06.895116] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:29:49.902 [2024-04-26 09:04:06.895162] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.902 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.902 [2024-04-26 09:04:06.987228] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:49.902 [2024-04-26 09:04:07.059184] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.902 [2024-04-26 09:04:07.059224] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:49.902 [2024-04-26 09:04:07.059234] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.902 [2024-04-26 09:04:07.059243] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.902 [2024-04-26 09:04:07.059251] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
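From here the tc2 case follows a start/connect/kill pattern: a target pinned to cores 4-7 (mask 0xF0) is started inside the namespace, the reconnect example connects with 32 queued I/Os per qpair, and the target is then killed with SIGKILL so every in-flight command completes in error — the long run of "completed with error (sct=0, sc=8)" lines that follows. A condensed sketch of that flow; the paths, the namespace name, and the flags mirror the traced script, but the fixed sleeps are stand-ins (the real harness synchronizes with waitforlisten rather than sleeping):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF -m 0xF0 &
    tgt_pid=$!
    sleep 2    # stand-in for waitforlisten "$tgt_pid"

    # ...transport/subsystem/namespace/listener RPCs as sketched earlier...

    "$SPDK_DIR/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 \
        -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnect_pid=$!
    sleep 2

    kill -9 "$tgt_pid"     # yank the target; queued I/O now fails
    wait "$reconnect_pid"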
00:29:49.902 [2024-04-26 09:04:07.059374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:49.902 [2024-04-26 09:04:07.059496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:49.902 [2024-04-26 09:04:07.059583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:49.902 [2024-04-26 09:04:07.059584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:50.469 09:04:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:50.469 09:04:07 -- common/autotest_common.sh@850 -- # return 0 00:29:50.469 09:04:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:50.469 09:04:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:50.469 09:04:07 -- common/autotest_common.sh@10 -- # set +x 00:29:50.728 09:04:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.728 09:04:07 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:50.728 09:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:50.728 09:04:07 -- common/autotest_common.sh@10 -- # set +x 00:29:50.728 Malloc0 00:29:50.728 09:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:50.728 09:04:07 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:50.728 09:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:50.728 09:04:07 -- common/autotest_common.sh@10 -- # set +x 00:29:50.728 [2024-04-26 09:04:07.760181] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.728 09:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:50.728 09:04:07 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:50.728 09:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:50.728 09:04:07 -- common/autotest_common.sh@10 -- # set +x 00:29:50.728 09:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:50.728 09:04:07 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:50.728 09:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:50.728 09:04:07 -- common/autotest_common.sh@10 -- # set +x 00:29:50.728 09:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:50.728 09:04:07 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:50.728 09:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:50.728 09:04:07 -- common/autotest_common.sh@10 -- # set +x 00:29:50.728 [2024-04-26 09:04:07.796503] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.728 09:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:50.728 09:04:07 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:50.728 09:04:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:50.728 09:04:07 -- common/autotest_common.sh@10 -- # set +x 00:29:50.728 09:04:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:50.728 09:04:07 -- host/target_disconnect.sh@50 -- # reconnectpid=2237643 00:29:50.728 09:04:07 -- host/target_disconnect.sh@52 -- # sleep 2 00:29:50.728 09:04:07 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:50.728 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.634 09:04:09 -- host/target_disconnect.sh@53 -- # kill -9 2237380 00:29:52.634 09:04:09 -- host/target_disconnect.sh@55 -- # sleep 2 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 [2024-04-26 09:04:09.827495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:52.634 starting I/O failed 00:29:52.634 Read completed 
with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 [2024-04-26 09:04:09.827724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error 
(sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Write completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 [2024-04-26 09:04:09.827946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.634 starting I/O failed 00:29:52.634 Read completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Read completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Read completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Read completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Read completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Write completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Write completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Write completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Read completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Write completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Write completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Write completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Read completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Read completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Write completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Read completed with error (sct=0, sc=8) 
00:29:52.635 starting I/O failed 00:29:52.635 Write completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Write completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Write completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Write completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Read completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Write completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Read completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Read completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Read completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Read completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Read completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Read completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 Write completed with error (sct=0, sc=8) 00:29:52.635 starting I/O failed 00:29:52.635 [2024-04-26 09:04:09.828165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:52.635 [2024-04-26 09:04:09.828721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.829288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.829329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.829953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.830433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.830484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.831056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.831566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.831606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.832093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.832599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.832639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 
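The four "CQ transport error -6 (No such device or address)" lines above mark the moment the kill -9 took effect: each of the reconnect tool's qpairs saw its in-flight reads and writes complete with errors and its completion queue die. Everything after that is the recovery path failing: each reattempt calls connect() toward 10.0.0.2:4420, and since no process listens there anymore the kernel answers with errno 111 (ECONNREFUSED on Linux), so nvme_tcp_qpair_connect_sock gives up and the qpair is declared unrecoverable. A quick way to observe the same refusal from a shell, as a sketch using bash's /dev/tcp and the address from this run:

  # Probe the NVMe/TCP listener; a dead target yields "Connection refused",
  # which is the same errno 111 the initiator logs above.
  if timeout 1 bash -c 'exec 3<> /dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "10.0.0.2:4420 is accepting connections"
  else
      echo "10.0.0.2:4420 refused the connection -- target still down"
  fi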
00:29:52.635 [2024-04-26 09:04:09.833218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.833770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.833810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.834352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.834831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.834871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.835376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.835862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.835901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.836471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.837005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.837043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.837603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.838110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.838149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.838647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.839112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.839130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.839657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.840219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.840236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 
00:29:52.635 [2024-04-26 09:04:09.840767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.841116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.841134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.841646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.842129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.842147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.842575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.843067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.843084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.843572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.843993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.844033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.844591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.845133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.845153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.845530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.846038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.846055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.846597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.847117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.847156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 
00:29:52.635 [2024-04-26 09:04:09.847726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.848248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.848287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.848891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.849402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.849419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.849967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.850427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.850478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-04-26 09:04:09.851039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-04-26 09:04:09.851440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.851490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.852009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.852586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.852626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.853261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.853758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.853796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.854275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.854683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.854723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 
00:29:52.636 [2024-04-26 09:04:09.855265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.855848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.855889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.856464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.857011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.857050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.857630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.858142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.858159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.858662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.859099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.859137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.859713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.860189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.860228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.860734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.861260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.861278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.861769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.862222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.862261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 
00:29:52.636 [2024-04-26 09:04:09.862858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.863409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.863427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.863899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.864381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.864420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.864944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.865565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.865583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.865962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.866554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.866594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.867183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.867728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.867767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.868258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.868732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.868771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.869255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.869728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.869767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 
00:29:52.636 [2024-04-26 09:04:09.870332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.870828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.870867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.871471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.871949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.871988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.872542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.872989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.873028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.873581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.874067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.874106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.874665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.875189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.875211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.875655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.876087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.876126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.876566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.877095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.877135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 
00:29:52.636 [2024-04-26 09:04:09.877724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.878259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.878299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-04-26 09:04:09.878922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.879521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-04-26 09:04:09.879543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.900 [2024-04-26 09:04:09.879927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.900 [2024-04-26 09:04:09.880430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.900 [2024-04-26 09:04:09.880446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.900 qpair failed and we were unable to recover it. 00:29:52.900 [2024-04-26 09:04:09.880857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.900 [2024-04-26 09:04:09.881339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.900 [2024-04-26 09:04:09.881378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.900 qpair failed and we were unable to recover it. 00:29:52.900 [2024-04-26 09:04:09.881891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.900 [2024-04-26 09:04:09.882443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.882496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.883068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.883596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.883637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.884062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.884547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.884587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 
00:29:52.901 [2024-04-26 09:04:09.885124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.885581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.885623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.886122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.886561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.886581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.887005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.887476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.887517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.888030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.888537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.888577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.889150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.889680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.889720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.890206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.890660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.890677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.891141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.891665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.891706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 
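The same triplet keeps repeating from here on: two refused connect() calls, one sock connection error for tqpair 0x7fcda4000b90, one "unable to recover" line, all against 10.0.0.2:4420, and it will keep doing so until the reconnect run's 10-second -t budget expires or a listener comes back. To watch for the target returning from a second shell, one option is polling the discovery service with nvme-cli (a sketch; assumes nvme-cli is installed on the host):

  # Loop until the NVMe/TCP discovery service answers again.
  until nvme discover -t tcp -a 10.0.0.2 -s 4420 > /dev/null 2>&1; do
      sleep 1
  done
  echo "discovery on 10.0.0.2:4420 is reachable again"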
00:29:52.901 [2024-04-26 09:04:09.892219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.892795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.892835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.893387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.893875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.893915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.894489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.895001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.895040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.895601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.896126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.896165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.896727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.897277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.897316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.897873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.898461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.898501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.899003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.899508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.899548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 
00:29:52.901 [2024-04-26 09:04:09.900072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.900540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.900579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.901114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.901597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.901637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.902192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.902712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.902752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.903096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.903572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.903612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.904114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.904581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.904621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.905122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.905700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.905739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.906247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.906775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.906793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 
00:29:52.901 [2024-04-26 09:04:09.907228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.907629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.907670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.908173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.908660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.908700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.909205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.909716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.909734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.910185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.910650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.910691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.911293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.911765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.911806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.912252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.912799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.912844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 00:29:52.901 [2024-04-26 09:04:09.913422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.913927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.901 [2024-04-26 09:04:09.913969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.901 qpair failed and we were unable to recover it. 
00:29:52.901 [2024-04-26 09:04:09.914544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.915121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.915160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.902 qpair failed and we were unable to recover it. 00:29:52.902 [2024-04-26 09:04:09.915581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.916166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.916185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.902 qpair failed and we were unable to recover it. 00:29:52.902 [2024-04-26 09:04:09.916571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.917065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.917082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.902 qpair failed and we were unable to recover it. 00:29:52.902 [2024-04-26 09:04:09.917533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.917958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.917997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.902 qpair failed and we were unable to recover it. 00:29:52.902 [2024-04-26 09:04:09.918493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.919063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.919102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.902 qpair failed and we were unable to recover it. 00:29:52.902 [2024-04-26 09:04:09.919671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.920227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.920266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.902 qpair failed and we were unable to recover it. 00:29:52.902 [2024-04-26 09:04:09.920695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.921197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.921236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.902 qpair failed and we were unable to recover it. 
00:29:52.902 [2024-04-26 09:04:09.921844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.922363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.922402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.902 qpair failed and we were unable to recover it. 00:29:52.902 [2024-04-26 09:04:09.922889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.923470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.923512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.902 qpair failed and we were unable to recover it. 00:29:52.902 [2024-04-26 09:04:09.923939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.924440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.924493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.902 qpair failed and we were unable to recover it. 00:29:52.902 [2024-04-26 09:04:09.924989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.925550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.925591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.902 qpair failed and we were unable to recover it. 00:29:52.902 [2024-04-26 09:04:09.926168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.926724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.926765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.902 qpair failed and we were unable to recover it. 00:29:52.902 [2024-04-26 09:04:09.927346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.927898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.927939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.902 qpair failed and we were unable to recover it. 00:29:52.902 [2024-04-26 09:04:09.928542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.929037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.902 [2024-04-26 09:04:09.929076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.902 qpair failed and we were unable to recover it. 
[... the same failure sequence — posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED), twice per attempt, followed by nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." — repeats for every retry from 09:04:09.929642 through 09:04:10.071662 ...]
00:29:52.907 [2024-04-26 09:04:10.072236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.907 [2024-04-26 09:04:10.072580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.907 [2024-04-26 09:04:10.072597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.907 qpair failed and we were unable to recover it. 00:29:52.907 [2024-04-26 09:04:10.073028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.907 [2024-04-26 09:04:10.073477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.907 [2024-04-26 09:04:10.073495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.907 qpair failed and we were unable to recover it. 00:29:52.907 [2024-04-26 09:04:10.073999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.907 [2024-04-26 09:04:10.074350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.907 [2024-04-26 09:04:10.074367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.907 qpair failed and we were unable to recover it. 00:29:52.907 [2024-04-26 09:04:10.074776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.907 [2024-04-26 09:04:10.075178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.907 [2024-04-26 09:04:10.075196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.907 qpair failed and we were unable to recover it. 00:29:52.907 [2024-04-26 09:04:10.075568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.907 [2024-04-26 09:04:10.075927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.907 [2024-04-26 09:04:10.075945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.907 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.076448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.076753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.076770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.077205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.077705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.077722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 
00:29:52.908 [2024-04-26 09:04:10.078209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.078614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.078631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.079066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.079431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.079447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.079863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.080219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.080236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.080741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.081193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.081210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.081585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.082009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.082026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.082461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.082902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.082919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.083260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.083689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.083707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 
00:29:52.908 [2024-04-26 09:04:10.084137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.084526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.084544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.084979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.085320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.085337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.085769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.086270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.086287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.086721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.087081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.087098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.087470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.087939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.087956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.088438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.088944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.088962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.089336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.089713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.089730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 
00:29:52.908 [2024-04-26 09:04:10.090220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.090570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.090590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.091034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.091513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.091531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.092011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.092397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.092414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.092826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.093246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.093263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.093773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.094213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.094230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.094660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.095062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.095079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.095435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.095945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.095963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 
00:29:52.908 [2024-04-26 09:04:10.096336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.096667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.096684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.097055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.097428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.097444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.097876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.098293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.098310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.098742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.099267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.099286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.099748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.100188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.100205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.908 qpair failed and we were unable to recover it. 00:29:52.908 [2024-04-26 09:04:10.100679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.908 [2024-04-26 09:04:10.101176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.101193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.101567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.102042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.102059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 
00:29:52.909 [2024-04-26 09:04:10.102472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.102724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.102742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.103116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.103533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.103551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.103946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.104372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.104389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.104849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.105217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.105234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.105718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.106168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.106186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.106634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.107112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.107129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.107551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.107977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.107996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 
00:29:52.909 [2024-04-26 09:04:10.108358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.108779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.108796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.109140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.109562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.109579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.110001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.110367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.110384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.110804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.111237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.111253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.111733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.112150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.112167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.112606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.112965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.112980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.113326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.113691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.113708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 
00:29:52.909 [2024-04-26 09:04:10.114209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.114609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.114627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.115106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.115543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.115560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.115983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.116460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.116480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.116976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.117406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.117423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.117812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.118259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.118276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.118645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.119068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.119085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.119565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.119938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.119955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 
00:29:52.909 [2024-04-26 09:04:10.120387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.120788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.120805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.121219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.121651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.121668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.122079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.122502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.122519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.123023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.123478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.123496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.909 qpair failed and we were unable to recover it. 00:29:52.909 [2024-04-26 09:04:10.123877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.124229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.909 [2024-04-26 09:04:10.124246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.124753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.125096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.125113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.125487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.125913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.125929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 
00:29:52.910 [2024-04-26 09:04:10.126344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.126748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.126765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.127172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.127604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.127621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.128102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.128520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.128537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.128961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.129348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.129365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.129723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.130207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.130224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.130705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.131072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.131089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.131569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.131726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.131743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 
00:29:52.910 [2024-04-26 09:04:10.132103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.132521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.132538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.132886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.133330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.133346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.133797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.134297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.134314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.134744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.135123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.135140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.135551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.136027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.136043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.136434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.136850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.136867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.137037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.137460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.137477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 
00:29:52.910 [2024-04-26 09:04:10.137968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.138470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.138495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.138913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.139391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.139408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.139912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.140438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.140468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.140979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.141416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.141434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:52.910 [2024-04-26 09:04:10.141933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.142315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.910 [2024-04-26 09:04:10.142339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:52.910 qpair failed and we were unable to recover it. 00:29:53.174 [2024-04-26 09:04:10.142835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.143280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.143298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 00:29:53.174 [2024-04-26 09:04:10.143672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.144102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.144120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 
00:29:53.174 [2024-04-26 09:04:10.144601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.145081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.145098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 00:29:53.174 [2024-04-26 09:04:10.145512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.145969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.145986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 00:29:53.174 [2024-04-26 09:04:10.146402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.146845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.146863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 00:29:53.174 [2024-04-26 09:04:10.147278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.147777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.147794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 00:29:53.174 [2024-04-26 09:04:10.148152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.148677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.148695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 00:29:53.174 [2024-04-26 09:04:10.149143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.149354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.149371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 00:29:53.174 [2024-04-26 09:04:10.149848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.150212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.150229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 
00:29:53.174 [2024-04-26 09:04:10.150654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.151066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.151083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 00:29:53.174 [2024-04-26 09:04:10.151498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.151928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.151945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 00:29:53.174 [2024-04-26 09:04:10.152327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.152776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.152793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 00:29:53.174 [2024-04-26 09:04:10.153241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.153679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.153696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 00:29:53.174 [2024-04-26 09:04:10.154188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.154615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.154632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 00:29:53.174 [2024-04-26 09:04:10.155135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.155549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.155566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 00:29:53.174 [2024-04-26 09:04:10.156044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.156566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.156583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 
00:29:53.174 [2024-04-26 09:04:10.157091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.157511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.174 [2024-04-26 09:04:10.157528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.174 qpair failed and we were unable to recover it. 00:29:53.175 [2024-04-26 09:04:10.158024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.175 [2024-04-26 09:04:10.158522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.175 [2024-04-26 09:04:10.158539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.175 qpair failed and we were unable to recover it. 00:29:53.175 [2024-04-26 09:04:10.158976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.175 [2024-04-26 09:04:10.159477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.175 [2024-04-26 09:04:10.159495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.175 qpair failed and we were unable to recover it. 00:29:53.175 [2024-04-26 09:04:10.159958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.175 [2024-04-26 09:04:10.160464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.175 [2024-04-26 09:04:10.160482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.175 qpair failed and we were unable to recover it. 00:29:53.175 [2024-04-26 09:04:10.160919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.175 [2024-04-26 09:04:10.161333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.175 [2024-04-26 09:04:10.161350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.175 qpair failed and we were unable to recover it. 00:29:53.175 [2024-04-26 09:04:10.161739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.175 [2024-04-26 09:04:10.162235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.175 [2024-04-26 09:04:10.162252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.175 qpair failed and we were unable to recover it. 00:29:53.175 [2024-04-26 09:04:10.162710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.175 [2024-04-26 09:04:10.163139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.175 [2024-04-26 09:04:10.163156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.175 qpair failed and we were unable to recover it. 
00:29:53.180 [2024-04-26 09:04:10.295397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.295955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.295994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.296479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.296962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.297001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.297533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.298005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.298043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.298587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.299050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.299089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.299517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.300038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.300076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.300576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.301048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.301088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.301568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.302013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.302051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 
00:29:53.180 [2024-04-26 09:04:10.302531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.303018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.303056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.303545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.303809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.303847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.304428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.304828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.304867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.305400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.305883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.305922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.306192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.306609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.306648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.306889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.307337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.307376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.307782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.308259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.308297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 
00:29:53.180 [2024-04-26 09:04:10.308835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.309294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.309332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.309875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.310373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.310411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.310918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.311472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.311511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.312093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.312326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.312342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.312780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.313304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.313343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.313891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.314434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.314482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.314955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.315213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.315251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 
00:29:53.180 [2024-04-26 09:04:10.315725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.316187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.316203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.180 qpair failed and we were unable to recover it. 00:29:53.180 [2024-04-26 09:04:10.316618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.180 [2024-04-26 09:04:10.317024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.317040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.317407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.317753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.317792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.318298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.319748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.319778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.320320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.320541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.320557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.320926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.321304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.321342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.321764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.322214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.322230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 
00:29:53.181 [2024-04-26 09:04:10.322686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.323119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.323157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.323570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.324037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.324074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.324577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.325079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.325118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.325359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.325828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.325867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.326348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.326876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.326915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.327392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.327812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.327851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.328226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.328635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.328675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 
00:29:53.181 [2024-04-26 09:04:10.329080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.330589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.330618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.331145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.331574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.331614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.332174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.332643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.332683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.333112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.334125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.334153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.334640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.335576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.335602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.335777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.336222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.336261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.336753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.337163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.337202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 
00:29:53.181 [2024-04-26 09:04:10.337496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.337954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.337992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.338520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.339002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.339041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.339474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.339950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.339988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.340487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.340947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.340986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.341423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.341970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.342010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.342475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.342912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.342952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.343419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.343921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.343961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 
00:29:53.181 [2024-04-26 09:04:10.344505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.344903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.344942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.345436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.345913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.345951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.346511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.346929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.346967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.347436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.347923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.347962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.348357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.348813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.348852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.349360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.349913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.349952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.181 qpair failed and we were unable to recover it. 00:29:53.181 [2024-04-26 09:04:10.350375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.351260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.181 [2024-04-26 09:04:10.351287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 
00:29:53.182 [2024-04-26 09:04:10.351749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.352589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.352615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.353037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.354179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.354207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.354724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.356029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.356057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.356580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.357115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.357131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.357548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.357729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.357765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.358238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.358783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.358825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.359247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.359665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.359704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 
00:29:53.182 [2024-04-26 09:04:10.359993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.360464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.360503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.360976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.361206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.361244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.361775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.362311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.362350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.362770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.363174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.363212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.363611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.364051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.364067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.364492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.364962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.365000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.365551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.366029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.366068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 
00:29:53.182 [2024-04-26 09:04:10.366581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.367116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.367155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.367623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.368049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.368065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.368536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.368988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.369026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.369480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.370000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.370049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.370464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.370861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.370877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.371251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.371657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.371673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.372093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.372475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.372514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 
00:29:53.182 [2024-04-26 09:04:10.373043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.373529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.373545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.373925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.374263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.374278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.374660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.374875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.374891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.375233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.375645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.375684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.375919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.376343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.376381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.376947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.377471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.377510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.377995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.378491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.378530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 
00:29:53.182 [2024-04-26 09:04:10.379023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.379506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.379544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.379821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.380335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.380372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.380862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.381379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.381395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.381819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.382276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.382314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.182 [2024-04-26 09:04:10.382724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.383248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.182 [2024-04-26 09:04:10.383293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.182 qpair failed and we were unable to recover it. 00:29:53.183 [2024-04-26 09:04:10.383771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.183 [2024-04-26 09:04:10.384193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.183 [2024-04-26 09:04:10.384232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.183 qpair failed and we were unable to recover it. 00:29:53.183 [2024-04-26 09:04:10.384525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.183 [2024-04-26 09:04:10.385048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.183 [2024-04-26 09:04:10.385086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.183 qpair failed and we were unable to recover it. 
00:29:53.183 [2024-04-26 09:04:10.385614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.183 [2024-04-26 09:04:10.386070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.183 [2024-04-26 09:04:10.386108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.184 qpair failed and we were unable to recover it. 00:29:53.184 [2024-04-26 09:04:10.386685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.387175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.387214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.184 qpair failed and we were unable to recover it. 00:29:53.184 [2024-04-26 09:04:10.387679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.388137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.388175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.184 qpair failed and we were unable to recover it. 00:29:53.184 [2024-04-26 09:04:10.388700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.389206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.389244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.184 qpair failed and we were unable to recover it. 00:29:53.184 [2024-04-26 09:04:10.389720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.390144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.390159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.184 qpair failed and we were unable to recover it. 00:29:53.184 [2024-04-26 09:04:10.390683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.391042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.391081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.184 qpair failed and we were unable to recover it. 00:29:53.184 [2024-04-26 09:04:10.391578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.392013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.392061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.184 qpair failed and we were unable to recover it. 
00:29:53.184 [2024-04-26 09:04:10.392517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.392992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.393037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.184 qpair failed and we were unable to recover it. 00:29:53.184 [2024-04-26 09:04:10.393473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.393964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.393980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.184 qpair failed and we were unable to recover it. 00:29:53.184 [2024-04-26 09:04:10.394357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.394765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.394804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.184 qpair failed and we were unable to recover it. 00:29:53.184 [2024-04-26 09:04:10.395274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.395671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.395711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.184 qpair failed and we were unable to recover it. 00:29:53.184 [2024-04-26 09:04:10.396244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.396671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.396710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.184 qpair failed and we were unable to recover it. 00:29:53.184 [2024-04-26 09:04:10.397174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.397554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.397594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.184 qpair failed and we were unable to recover it. 00:29:53.184 [2024-04-26 09:04:10.398017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.398482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-04-26 09:04:10.398521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420 00:29:53.184 qpair failed and we were unable to recover it. 
00:29:53.184 [2024-04-26 09:04:10.399022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.184 [2024-04-26 09:04:10.399488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.184 [2024-04-26 09:04:10.399527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcda4000b90 with addr=10.0.0.2, port=4420
00:29:53.184 qpair failed and we were unable to recover it.
[... the same sequence (two posix_sock_create "connect() failed, errno = 111" records, one nvme_tcp_qpair_connect_sock "sock connection error" record, then "qpair failed and we were unable to recover it.") repeats for tqpair=0x7fcda4000b90 through 2024-04-26 09:04:10.477747; every attempt targets addr=10.0.0.2, port=4420 and fails identically ...]
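errno = 111 on Linux is ECONNREFUSED: the connect() calls issued by posix_sock_create reach 10.0.0.2, but nothing is accepting TCP connections on port 4420 (the standard NVMe/TCP port), so the kernel refuses the handshake and nvme_tcp_qpair_connect_sock can never bring the qpair up. A minimal standalone sketch that reproduces the same errno against the address and port taken from this log (illustrative only, not SPDK code):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Address and port taken from the log above; no listener is bound there. */
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* On Linux this would print: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Run on the test host, this would keep printing "connect() failed, errno = 111 (Connection refused)" for as long as no NVMe/TCP target is listening on 10.0.0.2:4420.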
00:29:53.452 [2024-04-26 09:04:10.478212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.478735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.478758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.452 qpair failed and we were unable to recover it. 00:29:53.452 [2024-04-26 09:04:10.479193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.479600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.479618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.452 qpair failed and we were unable to recover it. 00:29:53.452 [2024-04-26 09:04:10.480104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.480517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.480557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.452 qpair failed and we were unable to recover it. 00:29:53.452 [2024-04-26 09:04:10.480983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.481363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.481401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.452 qpair failed and we were unable to recover it. 00:29:53.452 [2024-04-26 09:04:10.481830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.482302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.482340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.452 qpair failed and we were unable to recover it. 00:29:53.452 [2024-04-26 09:04:10.482886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.483335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.483373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.452 qpair failed and we were unable to recover it. 00:29:53.452 [2024-04-26 09:04:10.485132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.485573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.485592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.452 qpair failed and we were unable to recover it. 
00:29:53.452 [2024-04-26 09:04:10.486041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.486432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.486448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.452 qpair failed and we were unable to recover it. 00:29:53.452 [2024-04-26 09:04:10.486868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.487259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.487297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.452 qpair failed and we were unable to recover it. 00:29:53.452 [2024-04-26 09:04:10.487769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.488208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.488225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.452 qpair failed and we were unable to recover it. 00:29:53.452 [2024-04-26 09:04:10.488667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.489170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.489209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.452 qpair failed and we were unable to recover it. 00:29:53.452 [2024-04-26 09:04:10.489676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.490221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.452 [2024-04-26 09:04:10.490263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.452 qpair failed and we were unable to recover it. 00:29:53.452 [2024-04-26 09:04:10.490736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.491084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.491100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.491532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.491897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.491913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 
00:29:53.453 [2024-04-26 09:04:10.492252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.492668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.492683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.493122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.493544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.493560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.493967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.494319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.494335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.494783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.495258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.495274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.495756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.496257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.496273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.496800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.497214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.497229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.497608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.498050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.498069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 
00:29:53.453 [2024-04-26 09:04:10.498572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.499047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.499063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.499484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.499956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.499971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.500374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.500735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.500751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.501142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.501479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.501496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.501943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.502299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.502315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.502832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.503260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.503276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.503699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.504056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.504072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 
00:29:53.453 [2024-04-26 09:04:10.504502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.504921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.504937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.505360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.505838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.505854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.506269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.506687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.506706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.507115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.507542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.507558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.507984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.508344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.508360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.508836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.509310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.509326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.509678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.510095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.510111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 
00:29:53.453 [2024-04-26 09:04:10.510470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.510982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.510998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.511411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.511845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.511861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.512313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.512740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.512756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.513259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.513602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.513618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.514052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.514472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.514488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.453 qpair failed and we were unable to recover it. 00:29:53.453 [2024-04-26 09:04:10.514989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.515358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.453 [2024-04-26 09:04:10.515373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 00:29:53.454 [2024-04-26 09:04:10.515595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.516070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.516085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 
00:29:53.454 [2024-04-26 09:04:10.516590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.516945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.516961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 00:29:53.454 [2024-04-26 09:04:10.517439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.517889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.517906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 00:29:53.454 [2024-04-26 09:04:10.518317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.518752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.518768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 00:29:53.454 [2024-04-26 09:04:10.519170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.519578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.519594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 00:29:53.454 [2024-04-26 09:04:10.520034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.520504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.520519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 00:29:53.454 [2024-04-26 09:04:10.521021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.521493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.521508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 00:29:53.454 [2024-04-26 09:04:10.522042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.522447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.522468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 
00:29:53.454 [2024-04-26 09:04:10.522969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.523468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.523484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 00:29:53.454 [2024-04-26 09:04:10.523828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.524281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.524296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 00:29:53.454 [2024-04-26 09:04:10.524781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.525256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.525272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 00:29:53.454 [2024-04-26 09:04:10.525710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.526111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.526127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 00:29:53.454 [2024-04-26 09:04:10.526622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.527118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.527134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 00:29:53.454 [2024-04-26 09:04:10.527614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.528131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.528147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 00:29:53.454 [2024-04-26 09:04:10.528495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.528969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.454 [2024-04-26 09:04:10.528985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.454 qpair failed and we were unable to recover it. 
00:29:53.454 [2024-04-26 09:04:10.529466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.529940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.529956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.454 qpair failed and we were unable to recover it.
00:29:53.454 [2024-04-26 09:04:10.530380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.530806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.530822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.454 qpair failed and we were unable to recover it.
00:29:53.454 [2024-04-26 09:04:10.530992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.531343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.531358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.454 qpair failed and we were unable to recover it.
00:29:53.454 [2024-04-26 09:04:10.531840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.532341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.532356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.454 qpair failed and we were unable to recover it.
00:29:53.454 [2024-04-26 09:04:10.532569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.533022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.533038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.454 qpair failed and we were unable to recover it.
00:29:53.454 [2024-04-26 09:04:10.533493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.533994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.534010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.454 qpair failed and we were unable to recover it.
00:29:53.454 [2024-04-26 09:04:10.534494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.534931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.534947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.454 qpair failed and we were unable to recover it.
00:29:53.454 [2024-04-26 09:04:10.535377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.535894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.454 [2024-04-26 09:04:10.535910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.454 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.536341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.536766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.536782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.537305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.537723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.537739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.538240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.538717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.538732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.539259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.539763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.539779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.540265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.540779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.540795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.541205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.541679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.541696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.542191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.542713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.542729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.543090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.543590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.543606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.544039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.544463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.544478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.544982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.545405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.545420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.545923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.546352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.546367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.546799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.547318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.547333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.547695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.547920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.547936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.548360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.548573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.548589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.549072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.549572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.549588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.550016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.550442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.550462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.550985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.551403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.551419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.551913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.552396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.552414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.552955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.553362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.553377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.553879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.554093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.554108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.554610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.555015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.555031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.555442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.555845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.555861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.556362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.556860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.556876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.557253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.557610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.557626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.558125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.558644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.558660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.559106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.559537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.559553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.559928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.560370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.560386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.560809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.561309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.455 [2024-04-26 09:04:10.561324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.455 qpair failed and we were unable to recover it.
00:29:53.455 [2024-04-26 09:04:10.561774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.562198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.562214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.562719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.563144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.563159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.563613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.564116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.564131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.564651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.565028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.565043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.565498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.565994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.566010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.566422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.566933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.566949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.567457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.567861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.567878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.568375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.568817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.568833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.569311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.569808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.569824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.570306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.570753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.570768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.571274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.571634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.571650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.572154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.572597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.572612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.573113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.573612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.573628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.574140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.574288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.574303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.574804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.575277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.575293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.575743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.576235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.576251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.576729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.577083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.577098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.577519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.578017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.578032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.578521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.579013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.579029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.579392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.579855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.579871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.580353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.580840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.580855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.581346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.581832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.581848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.582355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.582794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.582811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.583231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.583648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.583664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.584098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.584512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.584528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.584951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.585358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.585373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.585787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.586211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.586226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.586657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.587029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.587044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.456 qpair failed and we were unable to recover it.
00:29:53.456 [2024-04-26 09:04:10.587515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.456 [2024-04-26 09:04:10.587875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.587890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.588134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.588597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.588613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.589049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.589500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.589516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.589996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.590142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.590157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.590524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.590947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.590963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.591466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.591986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.592002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.592429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.592952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.592968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.593395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.593846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.593863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.594293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.594765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.594781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.595006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.595414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.595430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.595909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.596413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.596428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.596882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.597316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.597332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.597691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.598092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.598112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.598475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.598894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.598910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.599414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.599817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.599833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.600222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.600697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.600713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.601191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.601666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.601682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.602205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.602716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.602732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.603266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.603727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.603742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.604152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.604586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.604602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.605008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.605505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.605522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.605990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.606401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.606416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.606950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.607454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.607470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.607977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.608455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.608470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.608921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.609399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.609414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.609831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.610260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.610276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.610661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.611114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.611130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.611560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.611998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.612013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.612497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.613018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.457 [2024-04-26 09:04:10.613034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.457 qpair failed and we were unable to recover it.
00:29:53.457 [2024-04-26 09:04:10.613513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.613926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.613942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.614425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.614856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.614872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.615322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.615799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.615815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.616192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.616579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.616595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.617043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.617544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.617560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.617987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.618365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.618388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.618814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.619322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.619339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.619781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.620196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.620216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.620733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.621262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.621279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.621709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.622084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.622100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.622486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.622885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.622900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.623384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.623814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.623829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.624261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.624667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.624682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.625087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.625300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.625315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.625735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.626141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.626157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.626630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.627122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.627138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.627563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.628007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.628023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.628518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.628994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.629009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.629431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.629930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.629946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.630368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.630743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.630759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.631185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.631659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.631675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.632090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.632495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.632511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.632920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.633420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.633435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.633865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.634321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.634337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.634816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.635258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.635274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.635708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.636111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.636127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.636607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.637013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.637028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.637443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.637944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.637959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.458 qpair failed and we were unable to recover it.
00:29:53.458 [2024-04-26 09:04:10.638490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.638981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.458 [2024-04-26 09:04:10.639020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.459 qpair failed and we were unable to recover it.
00:29:53.459 [2024-04-26 09:04:10.639599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.459 [2024-04-26 09:04:10.640083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.459 [2024-04-26 09:04:10.640122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.459 qpair failed and we were unable to recover it.
00:29:53.459 [2024-04-26 09:04:10.640655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.459 [2024-04-26 09:04:10.641126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.459 [2024-04-26 09:04:10.641164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.459 qpair failed and we were unable to recover it.
00:29:53.459 [2024-04-26 09:04:10.641636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.459 [2024-04-26 09:04:10.642171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.459 [2024-04-26 09:04:10.642209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.459 qpair failed and we were unable to recover it.
00:29:53.459 [2024-04-26 09:04:10.642505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.459 [2024-04-26 09:04:10.642970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.459 [2024-04-26 09:04:10.643007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.459 qpair failed and we were unable to recover it.
00:29:53.459 [2024-04-26 09:04:10.643436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.643907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.643945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.644390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.644948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.644993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.645505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.646079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.646117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.646685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.647182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.647197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.647651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.648177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.648215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.648768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.649255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.649293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.649776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.650240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.650279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 
00:29:53.459 [2024-04-26 09:04:10.650851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.651393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.651432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.651905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.652471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.652510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.653043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.653583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.653623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.654104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.654570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.654586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.655009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.655419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.655476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.655980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.656426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.656475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.656942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.657503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.657542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 
00:29:53.459 [2024-04-26 09:04:10.658120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.658661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.658700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.659154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.659691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.659730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.660207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.660687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.660726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.661245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.661714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.661753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.662241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.662784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.662823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.663105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.663586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.663625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.459 qpair failed and we were unable to recover it. 00:29:53.459 [2024-04-26 09:04:10.664112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.459 [2024-04-26 09:04:10.664556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.664595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 
00:29:53.460 [2024-04-26 09:04:10.664887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.665428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.665477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.665967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.666510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.666549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.666840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.667256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.667294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.667830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.668355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.668393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.668935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.669469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.669509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.670061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.670620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.670659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.670946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.671353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.671369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 
00:29:53.460 [2024-04-26 09:04:10.671881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.672433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.672478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.673034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.673516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.673555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.674086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.674628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.674667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.675164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.675567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.675606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.676180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.676699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.676738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.677257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.677726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.677765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.678296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.678773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.678813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 
00:29:53.460 [2024-04-26 09:04:10.679383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.679908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.679946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.680442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.680979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.681018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.681576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.682055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.682092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.682625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.683164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.683201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.683741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.684273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.684311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.684871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.685346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.685384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.685862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.686398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.686437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 
00:29:53.460 [2024-04-26 09:04:10.686997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.687498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.687515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.687961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.688439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.688486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.689017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.689477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.689494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.689986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.690440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.690487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.460 [2024-04-26 09:04:10.691031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.691520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.460 [2024-04-26 09:04:10.691536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.460 qpair failed and we were unable to recover it. 00:29:53.725 [2024-04-26 09:04:10.692047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.692547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.692563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.725 qpair failed and we were unable to recover it. 00:29:53.725 [2024-04-26 09:04:10.692976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.693340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.693378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.725 qpair failed and we were unable to recover it. 
00:29:53.725 [2024-04-26 09:04:10.693901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.694365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.694404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.725 qpair failed and we were unable to recover it. 00:29:53.725 [2024-04-26 09:04:10.694948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.695517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.695556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.725 qpair failed and we were unable to recover it. 00:29:53.725 [2024-04-26 09:04:10.696115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.696663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.696703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.725 qpair failed and we were unable to recover it. 00:29:53.725 [2024-04-26 09:04:10.697255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.697712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.697758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.725 qpair failed and we were unable to recover it. 00:29:53.725 [2024-04-26 09:04:10.698298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.698757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.698797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.725 qpair failed and we were unable to recover it. 00:29:53.725 [2024-04-26 09:04:10.699292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.699812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.699852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.725 qpair failed and we were unable to recover it. 00:29:53.725 [2024-04-26 09:04:10.700382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.700804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.700843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.725 qpair failed and we were unable to recover it. 
00:29:53.725 [2024-04-26 09:04:10.701400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.701978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.702018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.725 qpair failed and we were unable to recover it. 00:29:53.725 [2024-04-26 09:04:10.702570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.703038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.703076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.725 qpair failed and we were unable to recover it. 00:29:53.725 [2024-04-26 09:04:10.703619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.704075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.704113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.725 qpair failed and we were unable to recover it. 00:29:53.725 [2024-04-26 09:04:10.704666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.705206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.705243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.725 qpair failed and we were unable to recover it. 00:29:53.725 [2024-04-26 09:04:10.705710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.706192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.725 [2024-04-26 09:04:10.706242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.725 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.706462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.706947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.706984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.707473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.707952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.707996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 
00:29:53.726 [2024-04-26 09:04:10.708384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.708921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.708960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.709168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.709670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.709709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.710184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.710686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.710702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.711211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.711733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.711772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.712271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.712750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.712789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.713259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.713780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.713820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.714377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.714944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.714984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 
00:29:53.726 [2024-04-26 09:04:10.715445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.715956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.715994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.716287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.716784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.716823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.717310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.717866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.717905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.718394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.718941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.718979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.719246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.719679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.719718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.720271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.720821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.720860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.721397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.721870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.721909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 
00:29:53.726 [2024-04-26 09:04:10.722403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.722932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.722971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.723542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.724067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.724105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.724590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.725115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.725154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.725617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.726162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.726200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.726746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.727243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.727280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.727840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.728368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.728406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.728925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.729411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.729473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 
00:29:53.726 [2024-04-26 09:04:10.729825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.730342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.730381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.730921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.731391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.731430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.726 [2024-04-26 09:04:10.731993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.732536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.726 [2024-04-26 09:04:10.732575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.726 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.733059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.733362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.733378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.733616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.734104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.734120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.734584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.735132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.735171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.735661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.736131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.736169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 
00:29:53.727 [2024-04-26 09:04:10.736623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.736994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.737033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.737589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.737968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.738007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.738490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.738963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.739001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.739492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.739952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.739990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.740466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.740989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.741027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.741614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.742070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.742108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.742597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.743084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.743121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 
00:29:53.727 [2024-04-26 09:04:10.743418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.743829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.743845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.744350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.744817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.744857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.745290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.745706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.745746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.746173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.746698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.746736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.747207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.747673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.747712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.748187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.748641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.748680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.749169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.749682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.749722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 
00:29:53.727 [2024-04-26 09:04:10.750213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.750675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.750714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.751187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.751733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.751772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.752182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.752725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.752764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.753230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.753735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.753775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.754255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.754824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.754863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.755350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.755868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.755906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 00:29:53.727 [2024-04-26 09:04:10.756313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.756857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.756895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it. 
00:29:53.727 [2024-04-26 09:04:10.757434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.757942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.727 [2024-04-26 09:04:10.757981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:53.727 qpair failed and we were unable to recover it.
[... the same cycle (two "connect() failed, errno = 111" errors from posix_sock_create, one "sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420" from nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it.") repeats without variation from 09:04:10.758 through 09:04:10.840 ...]
00:29:53.731 [2024-04-26 09:04:10.840517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22522c0 is same with the state(5) to be set
00:29:53.731 [2024-04-26 09:04:10.841070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.841573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.841595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it.
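[Editor's note: errno = 111 in the lines above is Linux's ECONNREFUSED, returned by connect() when nothing is accepting connections at the target address; 4420 is the default NVMe/TCP port. A minimal sketch that reproduces the same error code, assuming a Linux host and that no listener is up at the address below (the address and port mirror the log but are assumptions, not part of the test harness):]

/* sketch: reproduce "connect() failed, errno = 111" (ECONNREFUSED) */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* On Linux, ECONNREFUSED == 111, matching the log lines above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}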
00:29:53.731 [2024-04-26 09:04:10.842089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.842257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.842270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.842766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.843239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.843278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.843799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.844297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.844336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.844903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.845444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.845461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.845899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.846411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.846464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.846935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.847465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.847504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.848062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.848471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.848510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 
00:29:53.731 [2024-04-26 09:04:10.848974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.849511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.849551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.850000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.850521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.850560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.851031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.851505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.851544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.852118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.852645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.852657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.853130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.853622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.853661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.854121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.854530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.854542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.855046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.855465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.855504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 
00:29:53.731 [2024-04-26 09:04:10.856061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.856505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.856518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.856944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.857448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.857506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.858062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.858466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.858506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.859062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.859534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.859574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.859846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.860365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.860403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.860853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.861246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.861268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.861668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.862145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.862183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 
00:29:53.731 [2024-04-26 09:04:10.862664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.863194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.863206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.863586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.864012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.864050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.731 qpair failed and we were unable to recover it. 00:29:53.731 [2024-04-26 09:04:10.864581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.731 [2024-04-26 09:04:10.865127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.865165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.865695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.866110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.866148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.866683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.866969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.867006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.867512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.868004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.868042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.868592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.869141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.869179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 
00:29:53.732 [2024-04-26 09:04:10.869702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.870099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.870111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.870533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.870999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.871010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.871463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.871977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.872015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.872496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.872940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.872978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.873536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.873891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.873903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.874402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.874940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.874979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.875476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.875973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.875985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 
00:29:53.732 [2024-04-26 09:04:10.876423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.876776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.876815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.877211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.877736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.877765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.878274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.878733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.878745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.879176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.879650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.879689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.880180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.880638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.880651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.881164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.881630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.881670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.882248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.882705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.882743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 
00:29:53.732 [2024-04-26 09:04:10.883251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.883817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.883856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.884355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.884883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.884922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.885472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.886706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.886725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.887249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.887735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.887771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.888175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.888590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.888629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.889105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.889582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.889622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.890207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.890689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.890728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 
00:29:53.732 [2024-04-26 09:04:10.891224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.891789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.891828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.892314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.892784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.892823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.732 [2024-04-26 09:04:10.893378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.893835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.732 [2024-04-26 09:04:10.893875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.732 qpair failed and we were unable to recover it. 00:29:53.733 [2024-04-26 09:04:10.894344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.733 [2024-04-26 09:04:10.894769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.733 [2024-04-26 09:04:10.894808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.733 qpair failed and we were unable to recover it. 00:29:53.733 [2024-04-26 09:04:10.895287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.733 [2024-04-26 09:04:10.895759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.733 [2024-04-26 09:04:10.895798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.733 qpair failed and we were unable to recover it. 00:29:53.733 [2024-04-26 09:04:10.896393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.733 [2024-04-26 09:04:10.896883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.733 [2024-04-26 09:04:10.896922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.733 qpair failed and we were unable to recover it. 00:29:53.733 [2024-04-26 09:04:10.897337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.733 [2024-04-26 09:04:10.897861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.733 [2024-04-26 09:04:10.897901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420 00:29:53.733 qpair failed and we were unable to recover it. 
00:29:53.733 [2024-04-26 09:04:10.898388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.898942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.898982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.899444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.899979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.900017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.900574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.901061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.901099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdac000b90 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.901628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.902128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.902174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.902660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.903140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.903179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.903596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.904055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.904095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.904577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.905125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.905164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.905641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.906053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.906092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.906573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.907048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.907086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.907570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.908096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.908135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.908694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.909221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.909259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.909690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.910150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.910188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.910657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.911136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.911175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.911633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.912070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.912110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.912606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.913000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.913039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.913594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.914117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.914155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.914633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.915163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.915179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.915690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.916142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.916180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.916701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.917135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.917173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.917706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.918212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.918250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.918743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.919236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.919275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.919750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.920312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.920350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.920884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.921423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.921577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.922132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.922677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.922724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.733 qpair failed and we were unable to recover it.
00:29:53.733 [2024-04-26 09:04:10.923207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.733 [2024-04-26 09:04:10.923476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.923493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.923830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.924311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.924349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.924815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.925276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.925314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.925769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.926332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.926370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.926890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.927361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.927399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.927808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.928257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.928296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.928776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.929319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.929358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.929911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.930432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.930480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.931012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.931509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.931548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.931965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.932462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.932507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.933043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.933533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.933573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.934037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.934501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.934539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.934767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.935300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.935338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.935793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.936295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.936332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.936817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.937340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.937378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.937664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.938167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.938204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.938751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.939305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.939343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.939768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.940261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.940299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.940756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.941209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.941247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.941732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.941988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.942004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.942433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.942858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.942898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.943397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.943946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.943985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.944493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.944907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.944946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.945476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.946032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.946070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.946634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.947076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.947113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.947674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.948202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.948240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.948775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.949247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.734 [2024-04-26 09:04:10.949285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.734 qpair failed and we were unable to recover it.
00:29:53.734 [2024-04-26 09:04:10.949755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.950283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.950321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.950815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.951263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.951300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.951780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.952306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.952344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.952812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.953354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.953392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.953972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.954516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.954556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.955034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.955547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.955563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.956045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.956495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.956511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.956945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.957404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.957443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.957983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.958527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.958567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.959092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.959615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.959653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.960133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.960390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.960428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.961018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.961499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.961537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.962022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.962517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.962567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.962995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.963445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.963491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.964020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.964485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.964524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.965068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.965497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.965513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.965941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.966482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.966522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:53.735 [2024-04-26 09:04:10.966748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.967141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.735 [2024-04-26 09:04:10.967157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:53.735 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.967676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.968136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.968152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.968639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.969044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.969081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.969559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.970050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.970088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.970645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.971067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.971105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.971657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.972200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.972216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.972700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.973228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.973267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.973840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.974314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.974353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.974902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.975447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.975491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.975972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.976522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.976555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.977014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.977527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.977566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.978029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.978533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.978572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.979119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.979667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.979705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.980262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.980815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.980854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.981318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.981840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.981879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.982296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.982836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.982875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.983424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.983983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.984033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.984492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.985013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.985051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.985514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.985974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.986012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.986490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.987013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.987052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.987528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.988069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.988107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.988684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.989227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.989265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.989813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.990263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.990301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.990851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.991369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.991407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.991882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.992369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.992406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.992955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.993506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.993544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.994026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.994529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.994568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.994990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.995520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.995559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.995977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.996523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.996562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.997093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.997553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.997592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.998120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.998658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.998696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:10.999094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.999598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:10.999637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:11.000192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:11.000712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:11.000751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:11.001320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:11.001795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:11.001834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:11.002363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:11.002954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:11.002994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:11.003546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:11.004077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:11.004116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:11.004581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:11.005098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:11.005136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:11.005702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:11.006167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-04-26 09:04:11.006206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-04-26 09:04:11.006784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.007290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.007328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.007801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.008271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.008309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.008852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.009309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.009347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.009894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.010343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.010381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.010897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.011440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.011490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.012020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.012530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.012570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.013125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.013596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.013635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.014206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.014740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.014772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.015308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.015835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.015873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.016380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.016851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.016890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.017437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.017934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.017985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.018493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.019011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.019049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.019533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.019982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.020020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.020479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.021022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.021060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.021518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.022064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.022080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.022579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.023065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.023104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.023609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.024138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.024177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.024658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.025127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.025166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.025657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.026080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.026119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.026611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.027143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-04-26 09:04:11.027159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-04-26 09:04:11.027665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.028158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.028196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.028672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.029147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.029185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.029714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.029991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.030007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.030419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.030986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.031026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.031483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.031967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.032006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.032510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.033051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.033090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.033646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.034044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.034082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 
00:29:54.001 [2024-04-26 09:04:11.034616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.035087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.035125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.035603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.036116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.036132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.036656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.037181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.037226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.037779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.038278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.038316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.038803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.039333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.039349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.039775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.040254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.040292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.040775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.041234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.041273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 
00:29:54.001 [2024-04-26 09:04:11.041873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.042417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.042462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.043023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.043566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.043605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.044098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.044593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.044609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.045050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.045642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.045680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.045953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.046303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.046341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.046820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.047365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.047403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.047887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.048332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.048348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 
00:29:54.001 [2024-04-26 09:04:11.048855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.049370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.049408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.049913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.050480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.050519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.050993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.051467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.051505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.051914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.052432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.052479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.053014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.053415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.053462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.054019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.054560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.054599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.055092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.055613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.055652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 
00:29:54.001 [2024-04-26 09:04:11.056147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.056604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.056642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.057197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.057719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.057761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.058287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.058774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.058814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.059302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.059845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.059890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.060398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.060933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.060972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.001 qpair failed and we were unable to recover it. 00:29:54.001 [2024-04-26 09:04:11.061382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.001 [2024-04-26 09:04:11.061870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.061909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.062440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.063027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.063066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 
00:29:54.002 [2024-04-26 09:04:11.063545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.064051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.064089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.064567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.065112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.065149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.065558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.066050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.066089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.066381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.066956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.066995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.067494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.068019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.068057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.068525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.068991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.069029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.069511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.070055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.070093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 
00:29:54.002 [2024-04-26 09:04:11.070673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.071213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.071251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.071801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.072305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.072343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.072843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.073258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.073296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.073863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.074371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.074387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.074806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.075290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.075328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.075881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.076110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.076149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.076704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.077175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.077213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 
00:29:54.002 [2024-04-26 09:04:11.077693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.078169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.078207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.078464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.078965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.079003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.079557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.080077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.080115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.080545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.081046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.081084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.081542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.081956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.081994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.082539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.082990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.083028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.083586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.084112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.084150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 
00:29:54.002 [2024-04-26 09:04:11.084617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.085164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.085202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.085630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.086121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.086159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.086729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.087191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.087230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.087684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.088208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.088246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.088763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.089308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.089352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.089903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.090186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.090224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.090674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.091102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.091140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 
00:29:54.002 [2024-04-26 09:04:11.091673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.092234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.092272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.092774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.093296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.093335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.093836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.094376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.094392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.094631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.095060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.095099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.095583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.096069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.096085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.096591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.097069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.097108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.097586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.098060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.098098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 
00:29:54.002 [2024-04-26 09:04:11.098572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.099116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.099148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.099369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.099879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.099920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.100332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.100787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.100827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.101245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.101733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.101773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.102268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.102795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.102834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.103330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.103756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.103795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.104329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.104851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.104867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 
00:29:54.002 [2024-04-26 09:04:11.105368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.105870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.105909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.106201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.106660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.106699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.107164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.107657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.107674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-04-26 09:04:11.108126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.108607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-04-26 09:04:11.108646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.109202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.109738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.109776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.110316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.110835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.110874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.111320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.111803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.111819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-04-26 09:04:11.112357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.112824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.112863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.113344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.113850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.113866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.114305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.114776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.114815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.115116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.115532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.115572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.116127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.116665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.116680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.117094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.117534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.117572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.118045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.118413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.118429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-04-26 09:04:11.118888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.119102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.119118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.119624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.120107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.120145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.120555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.121076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.121114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.121654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.122145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.122183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.122610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.123180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.123218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.123766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.124248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.124287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.124804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.125323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.125339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-04-26 09:04:11.125867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.126391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.126429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.126927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.127351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.127389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.127897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.128343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.128381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.128929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.129472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.129511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.130078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.130625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.130664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.130967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.131490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.131529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.132005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.132485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.132502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-04-26 09:04:11.133007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.133478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.133517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.133994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.134514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.134553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.134953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.135485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.135523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.136083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.136558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.136596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.137072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.137588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.137627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.138160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.138699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.138737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-04-26 09:04:11.139197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.139663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-04-26 09:04:11.139703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-04-26 09:04:11.140256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.003 [2024-04-26 09:04:11.140728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.003 [2024-04-26 09:04:11.140768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.003 qpair failed and we were unable to recover it.
[... the same four-line cycle repeats for every reconnect attempt from 09:04:11.141 through 09:04:11.310 (elapsed 00:29:54.003 to 00:29:54.271): two "posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111" messages, one "nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." ...]
00:29:54.271 [2024-04-26 09:04:11.310697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.271 [2024-04-26 09:04:11.311239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.271 [2024-04-26 09:04:11.311277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.271 qpair failed and we were unable to recover it.
00:29:54.271 [2024-04-26 09:04:11.311848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.312319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.312357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.271 qpair failed and we were unable to recover it. 00:29:54.271 [2024-04-26 09:04:11.312925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.313507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.313547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.271 qpair failed and we were unable to recover it. 00:29:54.271 [2024-04-26 09:04:11.314141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.314667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.314707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.271 qpair failed and we were unable to recover it. 00:29:54.271 [2024-04-26 09:04:11.315280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.315852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.315892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.271 qpair failed and we were unable to recover it. 00:29:54.271 [2024-04-26 09:04:11.316486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.317078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.317094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.271 qpair failed and we were unable to recover it. 00:29:54.271 [2024-04-26 09:04:11.317605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.318184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.318223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.271 qpair failed and we were unable to recover it. 00:29:54.271 [2024-04-26 09:04:11.318737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.319320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.319358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.271 qpair failed and we were unable to recover it. 
00:29:54.271 [2024-04-26 09:04:11.319976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.320555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.320607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.271 qpair failed and we were unable to recover it. 00:29:54.271 [2024-04-26 09:04:11.321104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.321649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.321694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.271 qpair failed and we were unable to recover it. 00:29:54.271 [2024-04-26 09:04:11.322204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.322791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.271 [2024-04-26 09:04:11.322831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.271 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.323344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.323852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.323892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.324401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.324909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.324949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.325555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.326111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.326149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.326717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.327253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.327291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 
00:29:54.272 [2024-04-26 09:04:11.327843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.328401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.328440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.329006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.329583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.329623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.330118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.330597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.330636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.331132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.331671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.331712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.332300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.332817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.332838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.333390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.333958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.333998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.334574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.335053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.335092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 
00:29:54.272 [2024-04-26 09:04:11.335615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.336184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.336223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.336777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.337207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.337246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.337843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.338407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.338461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.338971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.339471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.339489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.339947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.340507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.340543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.341019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.341543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.341583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.342093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.342575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.342595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 
00:29:54.272 [2024-04-26 09:04:11.343112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.343542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.343559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.344040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.344570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.344618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.345071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.345631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.345672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.346257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.346823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.346864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.347374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.347910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.347928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.348325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.348808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.348848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.272 qpair failed and we were unable to recover it. 00:29:54.272 [2024-04-26 09:04:11.349445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.272 [2024-04-26 09:04:11.350022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.350076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 
00:29:54.273 [2024-04-26 09:04:11.350667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.351151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.351203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.351813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.352430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.352483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.353089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.353691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.353742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.354228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.354787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.354804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.355297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.355771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.355789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.356240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.356705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.356755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.357197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.357733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.357773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 
00:29:54.273 [2024-04-26 09:04:11.358377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.358824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.358862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.359424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.360041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.360080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.360591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.361173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.361211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.361760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.362233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.362272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.362873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.363479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.363527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.363964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.364341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.364380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.364938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.365464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.365481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 
00:29:54.273 [2024-04-26 09:04:11.366030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.366537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.366554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.367093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.367632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.367672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.368238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.368781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.368821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.369411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.369930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.369969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.370534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.371148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.371187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.371780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.372351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.372368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.372821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.373307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.373345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 
00:29:54.273 [2024-04-26 09:04:11.373923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.374446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.374499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.273 qpair failed and we were unable to recover it. 00:29:54.273 [2024-04-26 09:04:11.375011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.273 [2024-04-26 09:04:11.375515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.375556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.376132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.376651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.376668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.377223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.377711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.377751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.378292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.378763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.378802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.379418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.379885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.379924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.380500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.380985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.381024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 
00:29:54.274 [2024-04-26 09:04:11.381573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.382134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.382172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.382750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.383310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.383348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.383960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.384523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.384562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.385143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.385644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.385684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.386264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.386845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.386885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.387391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.387986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.388025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.388642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.389219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.389263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 
00:29:54.274 [2024-04-26 09:04:11.389783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.390369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.390408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.391018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.391600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.391639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.392225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.392765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.392805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.393324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.393860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.393900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.394495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.394977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.395015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.395591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.396193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.396230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 00:29:54.274 [2024-04-26 09:04:11.396797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.397342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.274 [2024-04-26 09:04:11.397380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.274 qpair failed and we were unable to recover it. 
00:29:54.274 [2024-04-26 09:04:11.397956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.398505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.398545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.399105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.399667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.399708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.400269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.400814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.400854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.401301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.401855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.401894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.402492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.403062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.403100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.403601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.404183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.404222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.404832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.405422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.405469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 
00:29:54.275 [2024-04-26 09:04:11.406048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.406628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.406669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.407272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.407866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.407906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.408474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.409026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.409065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.409658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.410234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.410273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.410702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.411285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.411323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.411937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.412506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.412546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.413117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.413681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.413720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 
00:29:54.275 [2024-04-26 09:04:11.414217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.414772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.414812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.415401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.415979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.416019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.416539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.417128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.417166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.417736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.418271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.418309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.418920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.419511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.419551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.420139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.420703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.420743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 00:29:54.275 [2024-04-26 09:04:11.421304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.421845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.421884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it. 
00:29:54.275 [2024-04-26 09:04:11.422479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.423077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.275 [2024-04-26 09:04:11.423115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.275 qpair failed and we were unable to recover it.
00:29:54.275 [... message pattern repeated: the same three-line failure (two posix_sock_create connect() errors with errno = 111, then an nvme_tcp_qpair_connect_sock error for tqpair=0x2244780 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") recurs for every reconnect attempt from 09:04:11.423 through 09:04:11.598 ...]
00:29:54.545 [2024-04-26 09:04:11.597848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.545 [2024-04-26 09:04:11.598367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.545 [2024-04-26 09:04:11.598405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.545 qpair failed and we were unable to recover it.
00:29:54.545 [2024-04-26 09:04:11.598940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.545 [2024-04-26 09:04:11.599438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.545 [2024-04-26 09:04:11.599494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.545 qpair failed and we were unable to recover it. 00:29:54.545 [2024-04-26 09:04:11.600052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.545 [2024-04-26 09:04:11.600604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.545 [2024-04-26 09:04:11.600644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.545 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.601227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.601792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.601844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.602354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.602894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.602933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.603500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.604072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.604110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.604683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.605115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.605131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.605601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.606160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.606199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 
00:29:54.546 [2024-04-26 09:04:11.606789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.607389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.607427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.608023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.608582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.608622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.609204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.609743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.609783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.610391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.610969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.611010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.611551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.612111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.612150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.612747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.613183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.613222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.613768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.614267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.614304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 
00:29:54.546 [2024-04-26 09:04:11.614822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.615407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.615447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.615980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.616549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.616588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.617164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.617668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.617707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.618264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.618843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.618883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.619448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.619922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.619959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.620498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.620933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.620972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.621573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.622079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.622118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 
00:29:54.546 [2024-04-26 09:04:11.622700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.623316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.623355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.623931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.624470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.624510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.625013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.625578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.625617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.626209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.626770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.626787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.627249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.627768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.627808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.628303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.628859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.628899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.629476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.630041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.630080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 
00:29:54.546 [2024-04-26 09:04:11.630691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.631230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.631268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.631852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.632447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.632511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.546 qpair failed and we were unable to recover it. 00:29:54.546 [2024-04-26 09:04:11.633103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.633543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.546 [2024-04-26 09:04:11.633583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.634078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.634670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.634690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.635165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.635698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.635738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.636273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.636869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.636908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.637523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.638131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.638170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 
00:29:54.547 [2024-04-26 09:04:11.638677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.639232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.639271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.639856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.640283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.640299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.640813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.641329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.641346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.641897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.642344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.642361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.642824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.643364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.643381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.643939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.644436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.644458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.644938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.645442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.645465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 
00:29:54.547 [2024-04-26 09:04:11.645995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.646510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.646528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.647054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.647567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.647583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.648134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.648649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.648666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.649207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.649693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.649710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.650156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.650666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.650684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.651232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.651722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.651738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.652183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.652672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.652689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 
00:29:54.547 [2024-04-26 09:04:11.653198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.653686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.653703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.654154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.654644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.654662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.655156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.655689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.655706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.656168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.656611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.656629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.547 [2024-04-26 09:04:11.657149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.657621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.547 [2024-04-26 09:04:11.657638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.547 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.658101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.658531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.658548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.659059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.659456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.659473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 
00:29:54.548 [2024-04-26 09:04:11.659969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.660504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.660521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.661058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.661546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.661564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.662056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.662592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.662610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.663152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.663616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.663633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.664195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.664738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.664755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.665200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.665779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.665797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.666332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.666868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.666885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 
00:29:54.548 [2024-04-26 09:04:11.667419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.667916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.667932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.668460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.668968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.668984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.669497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.669936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.669952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.670469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.670910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.670927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.671390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.671905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.671922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.672455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.672978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.672994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.673533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.674045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.674062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 
00:29:54.548 [2024-04-26 09:04:11.674573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.675005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.675022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.675538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.675982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.675999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.676524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.677076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.677093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.677592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.678058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.678075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.678639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.679079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.679097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.679608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.680068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.680084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.680596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.681108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.681125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 
00:29:54.548 [2024-04-26 09:04:11.681646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.682185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.682202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.682757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.683258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.683275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.683865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.684473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.684512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.685086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.685559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.685598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.686143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.686723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.686741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-04-26 09:04:11.687190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.687707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.548 [2024-04-26 09:04:11.687752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.688334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.688820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.688860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 
00:29:54.549 [2024-04-26 09:04:11.689426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.690049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.690088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.690675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.691162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.691201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.691794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.692284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.692322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.692860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.693301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.693339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.693964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.694570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.694610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.695172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.695704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.695721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.696262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.696818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.696857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 
00:29:54.549 [2024-04-26 09:04:11.697363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.697797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.697836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.698415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.699023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.699063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.699596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.700076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.700114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.700618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.701134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.701172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.701797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.702237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.702253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.702713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.703251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.703289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.703813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.704377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.704415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 
00:29:54.549 [2024-04-26 09:04:11.705003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.705515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.705553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.705980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.706479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.706518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.707025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.707579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.707620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.708214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.708703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.708743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.709259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.709731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.709771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.710312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.710856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.710896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.711388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.711938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.711978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 
00:29:54.549 [2024-04-26 09:04:11.712578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.713002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.713040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.713586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.714074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.714113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.714679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.715260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.715299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.715806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.716284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.716301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.716864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.717387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.717425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.717963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.718557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.718597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-04-26 09:04:11.719207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.719799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.549 [2024-04-26 09:04:11.719838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.550 qpair failed and we were unable to recover it. 
00:29:54.550 [2024-04-26 09:04:11.720425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.721028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.721067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.721634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.722128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.722166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.722740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.723146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.723163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.723688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.724196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.724230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.724665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.725155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.725193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.725746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.726253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.726291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.726883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.727320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.727359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.727859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.728444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.728496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.729090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.729679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.729718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.730285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.730783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.730822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.731418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.732041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.732080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.732692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.733271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.733309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.733896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.734315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.734353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.734910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.735289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.735306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.735824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.736383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.736421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.736973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.737536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.737554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.738073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.738631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.738671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.739214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.739785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.739825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.740332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.740859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.740899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.741489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.742052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.742090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.742663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.743241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.743280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.743852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.744365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.744410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.745010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.745579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.745620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.746142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.746606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.746623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.747168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.747674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.747691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.748150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.748730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.748769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.749316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.749861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.749902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.750531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.751125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.751164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.550 qpair failed and we were unable to recover it.
00:29:54.550 [2024-04-26 09:04:11.751663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.550 [2024-04-26 09:04:11.752208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.752248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.752821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.753369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.753408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.753884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.754436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.754489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.755023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.755572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.755619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.756127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.756617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.756657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.757252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.757690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.757731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.758254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.758787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.758828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.759363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.759822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.759862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.760381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.760913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.760953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.761549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.762129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.762167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.762677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.763165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.763204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.763773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.764307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.764346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.764930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.765448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.765497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.766073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.766605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.766644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.767047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.767536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.767577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.768074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.768607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.768648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.769150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.769629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.769675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.770191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.770705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.770744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.771292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.771833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.771873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.772463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.772971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.773010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.773594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.774030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.774068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.774636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.775195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.775233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.775754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.776334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.776373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.776924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.777444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.777492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.778009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.778594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.778634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.779156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.779702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.779741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.780202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.780677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.780694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.781214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.781659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.781676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.782148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.782644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.782661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.551 qpair failed and we were unable to recover it.
00:29:54.551 [2024-04-26 09:04:11.783173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.551 [2024-04-26 09:04:11.783717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.552 [2024-04-26 09:04:11.783758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.552 qpair failed and we were unable to recover it.
00:29:54.552 [2024-04-26 09:04:11.784396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.552 [2024-04-26 09:04:11.784900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.552 [2024-04-26 09:04:11.784917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.552 qpair failed and we were unable to recover it.
00:29:54.815 [2024-04-26 09:04:11.785419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.815 [2024-04-26 09:04:11.785879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.815 [2024-04-26 09:04:11.785896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.815 qpair failed and we were unable to recover it.
00:29:54.815 [2024-04-26 09:04:11.786398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.815 [2024-04-26 09:04:11.786785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.815 [2024-04-26 09:04:11.786825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.815 qpair failed and we were unable to recover it.
00:29:54.815 [2024-04-26 09:04:11.787322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.815 [2024-04-26 09:04:11.787882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.815 [2024-04-26 09:04:11.787922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.815 qpair failed and we were unable to recover it.
00:29:54.815 [2024-04-26 09:04:11.788434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.815 [2024-04-26 09:04:11.788943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.815 [2024-04-26 09:04:11.788982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.815 qpair failed and we were unable to recover it.
00:29:54.815 [2024-04-26 09:04:11.789602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.815 [2024-04-26 09:04:11.790163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.815 [2024-04-26 09:04:11.790201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.790696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.791181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.791219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.791851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.792376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.792415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.792930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.793497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.793536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.794054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.794618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.794659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.795216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.795696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.795713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.796156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.796701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.796741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.797265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.797751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.797791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.798362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.798960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.799000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.799603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.800163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.800180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.800628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.801067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.801084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.801596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.802047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.802085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.802686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.803226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.803265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.803783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.804224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.804263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.804872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.805491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.805531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.806036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.806622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.806662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.807151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.807733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.807773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.808390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.808946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.808987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.809607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.810091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.810108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.810549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.810998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.811043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.811619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.812107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.812145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.812642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.813150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.813189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.813715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.814286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.814325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.814885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.815416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.815468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.815970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.816585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.816622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.817104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2237380 Killed "${NVMF_APP[@]}" "$@"
00:29:54.816 [2024-04-26 09:04:11.821471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.821510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 09:04:11 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:29:54.816 [2024-04-26 09:04:11.821931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 09:04:11 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:54.816 [2024-04-26 09:04:11.822496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.822517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
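The shell lines interleaved above mark the mechanism of the test rather than a fault: target_disconnect.sh (line 44) kills the running nvmf target app, which is why every subsequent connect() is refused, and then disconnect_init/nvmfappstart bring a fresh target up. While the target is down, the host side keeps retrying and eventually declares the qpair unrecoverable. A rough, purely illustrative C sketch of such a bounded retry loop follows; it is not the actual SPDK nvme_tcp reconnect logic, and the attempt cap and one-second delay are arbitrary values chosen for the sketch:

/* Illustrative sketch only, not SPDK's nvme_tcp reconnect logic:
 * retry a TCP connect while the target is down and give up after a
 * fixed number of refused attempts, mirroring the log's terminal message. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* One probe attempt; returns 0 on success, -errno on failure. */
static int try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return -errno;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int err = 0;
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        err = errno;                 /* save before close() can clobber it */
    }
    close(fd);
    return -err;
}

int main(void)
{
    const int max_attempts = 8;      /* arbitrary cap for the sketch */

    for (int i = 1; i <= max_attempts; i++) {
        int rc = try_connect("10.0.0.2", 4420);
        if (rc == 0) {
            printf("attempt %d: connected\n", i);
            return 0;
        }
        fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                i, -rc, strerror(-rc));
        sleep(1);                    /* crude fixed back-off between attempts */
    }

    /* Same terminal condition the log reports for a qpair. */
    fprintf(stderr, "qpair failed and we were unable to recover it.\n");
    return 1;
}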
00:29:54.816 09:04:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:29:54.816 09:04:11 -- common/autotest_common.sh@710 -- # xtrace_disable
00:29:54.816 [2024-04-26 09:04:11.823045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 09:04:11 -- common/autotest_common.sh@10 -- # set +x
00:29:54.816 [2024-04-26 09:04:11.823573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.823593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.824045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.824581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.824604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.825124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.825568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.825587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.826034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.826497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.826515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.826966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.827488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.827507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.827911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.828447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.828472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.829053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.829571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.829589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.830063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 [2024-04-26 09:04:11.830580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.830599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 [2024-04-26 09:04:11.831051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 09:04:11 -- nvmf/common.sh@470 -- # nvmfpid=2238246
00:29:54.816 09:04:11 -- nvmf/common.sh@471 -- # waitforlisten 2238246
00:29:54.816 09:04:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:54.816 [2024-04-26 09:04:11.831542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.831568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 09:04:11 -- common/autotest_common.sh@817 -- # '[' -z 2238246 ']'
00:29:54.816 [2024-04-26 09:04:11.832131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 09:04:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:54.816 09:04:11 -- common/autotest_common.sh@822 -- # local max_retries=100
00:29:54.816 [2024-04-26 09:04:11.832582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.832598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.816 qpair failed and we were unable to recover it.
00:29:54.816 09:04:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:54.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:54.816 [2024-04-26 09:04:11.833004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.816 09:04:11 -- common/autotest_common.sh@826 -- # xtrace_disable
00:29:54.816 09:04:11 -- common/autotest_common.sh@10 -- # set +x
00:29:54.816 [2024-04-26 09:04:11.833374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.833390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.833831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.834191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.834206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.834722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.835103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.835117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.835630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.835990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.836004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.836571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.837053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.837068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.837637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.838072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.838087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.838531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.838959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.838975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.839436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.839996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.840013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.840532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.841079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.841095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.841556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.841928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.841942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.842373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.842805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.842819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.843264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.843636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.843650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.844027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.844481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.844496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.845017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.845591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.845606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.846049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.846465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.846482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.846868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.847219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.847232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.847696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.848076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.848091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.848488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.848918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.848932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.849447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.849914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.849942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.850490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.850954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.850982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.851018] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22522c0 (9): Bad file descriptor
00:29:54.817 [2024-04-26 09:04:11.851612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.852161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.852182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.852619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.853084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.853102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
00:29:54.817 [2024-04-26 09:04:11.853596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.817 [2024-04-26 09:04:11.853985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 [2024-04-26 09:04:11.854003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:54.817 qpair failed and we were unable to recover it.
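Two details change in this stretch of the log. First, the flush failure reports (9), which is EBADF (Bad file descriptor) on Linux, suggesting the socket behind tqpair 0x22522c0 had already been torn down when the flush was attempted. Second, the subsequent connection errors carry a new tqpair pointer (0x7fcdb4000b90 rather than 0x2244780), consistent with a freshly allocated qpair being used for the later attempts. A tiny standalone C sketch, illustrative only, that reproduces the EBADF errno by using a file descriptor after closing it:

/* Illustrative sketch only: reproduce errno 9 (EBADF) by writing through
 * a file descriptor that has already been closed, the same errno the
 * failed flush reports above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }

    close(fds[1]);                   /* tear the write side down first... */

    if (write(fds[1], "x", 1) < 0) { /* ...then try to flush through it */
        /* Prints: write() failed, errno = 9 (Bad file descriptor) */
        printf("write() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fds[0]);
    return 0;
}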
00:29:54.817 [2024-04-26 09:04:11.854449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.854833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.854850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-04-26 09:04:11.855345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.855620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.855638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-04-26 09:04:11.856077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.856505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.856522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-04-26 09:04:11.856959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.857391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.857407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-04-26 09:04:11.857933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.858423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.858440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-04-26 09:04:11.858649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.859174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.859198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-04-26 09:04:11.859695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.860178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.860195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 
00:29:54.817 [2024-04-26 09:04:11.860649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.861110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.861126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-04-26 09:04:11.861587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.861989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.862005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-04-26 09:04:11.862518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.862989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.863005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-04-26 09:04:11.863432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.863949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.863966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-04-26 09:04:11.864525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.865033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.865050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-04-26 09:04:11.865565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.865999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.866015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-04-26 09:04:11.866524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.866903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.866919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 
00:29:54.817 [2024-04-26 09:04:11.867351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.867758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.867775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-04-26 09:04:11.868261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.868769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.868785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-04-26 09:04:11.869221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.869671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.869687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.817 [2024-04-26 09:04:11.870185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.870642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.817 [2024-04-26 09:04:11.870661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.817 qpair failed and we were unable to recover it. 00:29:54.818 [2024-04-26 09:04:11.871134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.871557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.871573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-04-26 09:04:11.872045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.872554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.872571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-04-26 09:04:11.872936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.873315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.873331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 
00:29:54.818 [2024-04-26 09:04:11.873778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.874144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.874160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-04-26 09:04:11.874584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.875024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.875040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-04-26 09:04:11.875471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.875953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.875970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-04-26 09:04:11.876494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.876975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.876991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-04-26 09:04:11.877415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.877793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.877810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-04-26 09:04:11.878186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.878672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.878689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 00:29:54.818 [2024-04-26 09:04:11.879108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.879558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.818 [2024-04-26 09:04:11.879574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:54.818 qpair failed and we were unable to recover it. 
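The errno in every posix_sock_create failure above is 111, which on x86-64 Linux is ECONNREFUSED: nothing was accepting TCP connections on 10.0.0.2:4420 at that point in the run. A minimal repro sketch, assuming plain POSIX sockets rather than SPDK's sock layer, with the address and port taken from the log:

    /* Sketch: reproduce the connect() that posix_sock_create reports.
     * Assumptions: plain POSIX sockets; 10.0.0.2:4420 from the log. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };

        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            /* With a reachable host but no NVMe/TCP listener on port 4420,
             * this prints: connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }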
[... retries continue for tqpair=0x7fcdb4000b90 from 09:04:11.879991 through 09:04:11.882037 ...]
00:29:54.818 [2024-04-26 09:04:11.882523] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization...
00:29:54.818 [2024-04-26 09:04:11.882577] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... retries continue for tqpair=0x7fcdb4000b90 from 09:04:11.882544 through 09:04:11.885170 ...]
[... retries continue for tqpair=0x7fcdb4000b90 from 09:04:11.885603 through 09:04:11.887064 ...]
00:29:54.818 [2024-04-26 09:04:11.887529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.818 [2024-04-26 09:04:11.887796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.818 [2024-04-26 09:04:11.887814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:54.818 qpair failed and we were unable to recover it.
[... the same retry sequence resumes for tqpair=0x2244780 from 09:04:11.888255 through 09:04:11.920403 ...]
00:29:54.819 [2024-04-26 09:04:11.920863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.819 EAL: No free 2048 kB hugepages reported on node 1
[... retries continue for tqpair=0x2244780 through 09:04:11.926203 ...]
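The interleaved EAL notice means DPDK found no free 2048 kB hugepages on NUMA node 1 during initialization. A stand-alone check sketch, assuming the per-node sysfs counter that Linux exposes for this page size:

    /* Sketch: read the per-node free-hugepage counter for 2048 kB pages.
     * Assumption: standard Linux sysfs layout on a NUMA system. */
    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
        FILE *f = fopen(path, "r");
        int free_pages = -1;

        if (f != NULL) {
            if (fscanf(f, "%d", &free_pages) == 1)
                printf("node 1: %d free 2048 kB hugepages\n", free_pages);
            fclose(f);
        }
        return 0;
    }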
[... the same connect() failed (errno = 111) retry sequence repeats for tqpair=0x2244780 from 09:04:11.926587 through 09:04:11.973700; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:54.821 [2024-04-26 09:04:11.974119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.974596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.974612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-04-26 09:04:11.975128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.975130] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:54.821 [2024-04-26 09:04:11.975625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.975641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-04-26 09:04:11.976120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.976652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.976668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-04-26 09:04:11.977123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.977573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.977589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-04-26 09:04:11.978066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.978550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.978567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-04-26 09:04:11.979095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.979638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.979657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-04-26 09:04:11.980137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.980639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.980656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 
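errno = 111 here is Linux ECONNREFUSED: posix_sock_create() is dialing 10.0.0.2:4420 before the target side (whose spdk_app_start notice only now appears above) has bound its listener, so the kernel refuses each attempt and the initiator retries. A minimal Python sketch of the same condition, not SPDK code; the loopback port below is a stand-in chosen so that no listener exists:

    import errno
    import socket

    # The log's target was 10.0.0.2:4420; loopback with no listener
    # bound reproduces the same refusal deterministically.
    ADDR = ("127.0.0.1", 4420)

    try:
        with socket.create_connection(ADDR, timeout=1):
            print("connected")  # a listener was present after all
    except OSError as exc:
        if exc.errno == errno.ECONNREFUSED:
            print(f"connect() failed, errno = {exc.errno} (ECONNREFUSED)")
        else:
            raise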
00:29:54.821 [2024-04-26 09:04:11.981123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.981625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.981642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-04-26 09:04:11.982174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.982648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.982666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-04-26 09:04:11.983100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.983476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.983494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-04-26 09:04:11.983911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.984344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.984362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-04-26 09:04:11.984855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.985309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.821 [2024-04-26 09:04:11.985327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.821 qpair failed and we were unable to recover it. 00:29:54.821 [2024-04-26 09:04:11.985832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.986258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.986274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:11.986780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.987160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.987175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 
00:29:54.822 [2024-04-26 09:04:11.987680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.988121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.988137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:11.988639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.989109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.989125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:11.989641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.990115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.990131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:11.990582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.991099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.991115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:11.991639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.991997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.992013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:11.992537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.993003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.993020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:11.993489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.993964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.993981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 
00:29:54.822 [2024-04-26 09:04:11.994510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.995068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.995086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:11.995551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.995973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.995990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:11.996433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.996898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.996915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:11.997356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.997718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.997735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:11.998211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.998679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.998695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:11.999125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.999627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:11.999644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:12.000107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.000520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.000537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 
00:29:54.822 [2024-04-26 09:04:12.000913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.001413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.001429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:12.001973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.002430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.002446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:12.002952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.003457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.003474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:12.003914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.004282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.004298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:12.004841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.005289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.005304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:12.005827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.006266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.006282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:12.006786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.007270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.007287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 
00:29:54.822 [2024-04-26 09:04:12.007796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.008174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.008190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.822 qpair failed and we were unable to recover it. 00:29:54.822 [2024-04-26 09:04:12.008694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.822 [2024-04-26 09:04:12.009172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.009188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.009721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.010250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.010265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.010749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.011238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.011254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.011761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.012262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.012284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.012827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.013263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.013282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.013738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.014243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.014261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 
00:29:54.823 [2024-04-26 09:04:12.014841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.015340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.015357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.015884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.016313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.016330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.016754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.017182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.017199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.017682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.018169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.018185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.018562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.019013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.019030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.019528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.019899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.019917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.020426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.020935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.020953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 
00:29:54.823 [2024-04-26 09:04:12.021382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.021817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.021834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.022262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.022761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.022777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.023155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.023655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.023671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.024172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.024698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.024714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.025178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.025609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.025625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.025991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.026566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.026582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.823 qpair failed and we were unable to recover it. 00:29:54.823 [2024-04-26 09:04:12.027014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.027589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.823 [2024-04-26 09:04:12.027605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 
00:29:54.824 [2024-04-26 09:04:12.028063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.028536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.028556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.028930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.029411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.029427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.029940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.030471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.030487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.030936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.031434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.031456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.031929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.032461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.032477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.032975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.033501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.033517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.033952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.034380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.034396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 
00:29:54.824 [2024-04-26 09:04:12.034817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.035245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.035261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.035680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.036105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.036121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.036621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.037067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.037083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.037544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.037993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.038008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.038566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.039015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.039031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.039540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.039927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.039943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.040370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.040838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.040855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 
00:29:54.824 [2024-04-26 09:04:12.041344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.041856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.041872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.042371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.042869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.042886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.043320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.043738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.043755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.044254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.044674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.044691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.045171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.045616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.045634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.046013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.046109] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.824 [2024-04-26 09:04:12.046145] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.824 [2024-04-26 09:04:12.046155] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.824 [2024-04-26 09:04:12.046164] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.824 [2024-04-26 09:04:12.046171] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
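The app_setup_trace notices above name the two supported ways to capture the tracepoint data: run spdk_trace against the live shared-memory instance ('spdk_trace -s nvmf -i 0', as the notice says), or copy /dev/shm/nvmf_trace.0 for offline analysis. A minimal Python sketch of the copy route; the destination filename is arbitrary:

    import shutil

    # Shared-memory trace file named in the notices above; copying it
    # while the app runs preserves the events for offline analysis.
    shutil.copy("/dev/shm/nvmf_trace.0", "nvmf_trace.snapshot")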
00:29:54.824 [2024-04-26 09:04:12.046289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:54.824 [2024-04-26 09:04:12.046400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:54.824 [2024-04-26 09:04:12.046510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:54.824 [2024-04-26 09:04:12.046511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:54.824 [2024-04-26 09:04:12.046530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.046547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.047046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.047483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.047500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.047935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.048425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.048441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.048827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.049308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.049323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.824 qpair failed and we were unable to recover it. 00:29:54.824 [2024-04-26 09:04:12.049858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.824 [2024-04-26 09:04:12.050347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-04-26 09:04:12.050363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-04-26 09:04:12.050879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-04-26 09:04:12.051306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-04-26 09:04:12.051321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-04-26 09:04:12.051850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-04-26 09:04:12.052212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-04-26 09:04:12.052228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 
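The reactor_run notices show the target's event-loop threads coming up on cores 4-7, consistent with the "Total cores available: 4" notice earlier, and the initiator keeps redialing rather than giving up on the first refusal. A hypothetical retry helper sketching that pattern in Python; this is illustrative only, not SPDK's actual reconnect path, and all names and parameters are invented:

    import errno
    import socket
    import time

    def connect_with_retry(addr, attempts=50, delay=0.1):
        # Keep dialing until a listener accepts or attempts run out;
        # ECONNREFUSED just means the target has not bound yet.
        for _ in range(attempts):
            try:
                return socket.create_connection(addr, timeout=1)
            except OSError as exc:
                if exc.errno != errno.ECONNREFUSED:
                    raise
                time.sleep(delay)  # target still starting; try again
        raise TimeoutError(f"no listener at {addr} after {attempts} attempts")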
00:29:54.825 [2024-04-26 09:04:12.052747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-04-26 09:04:12.053288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-04-26 09:04:12.053304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-04-26 09:04:12.053831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-04-26 09:04:12.054260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-04-26 09:04:12.054276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-04-26 09:04:12.054778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-04-26 09:04:12.055254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-04-26 09:04:12.055271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-04-26 09:04:12.055784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-04-26 09:04:12.056232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-04-26 09:04:12.056249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:54.825 [2024-04-26 09:04:12.056710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-04-26 09:04:12.057160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.825 [2024-04-26 09:04:12.057176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:54.825 qpair failed and we were unable to recover it. 00:29:55.090 [2024-04-26 09:04:12.057668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.090 [2024-04-26 09:04:12.058112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.090 [2024-04-26 09:04:12.058129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.090 qpair failed and we were unable to recover it. 00:29:55.090 [2024-04-26 09:04:12.058607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.090 [2024-04-26 09:04:12.058987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.090 [2024-04-26 09:04:12.059003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.090 qpair failed and we were unable to recover it. 
00:29:55.091 [2024-04-26 09:04:12.059434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.059842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.059860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.091 qpair failed and we were unable to recover it. 00:29:55.091 [2024-04-26 09:04:12.060387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.060889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.060906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.091 qpair failed and we were unable to recover it. 00:29:55.091 [2024-04-26 09:04:12.061354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.061842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.061859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.091 qpair failed and we were unable to recover it. 00:29:55.091 [2024-04-26 09:04:12.062244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.062673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.062690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.091 qpair failed and we were unable to recover it. 00:29:55.091 [2024-04-26 09:04:12.063174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.063699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.063717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.091 qpair failed and we were unable to recover it. 00:29:55.091 [2024-04-26 09:04:12.064097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.064598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.064615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.091 qpair failed and we were unable to recover it. 00:29:55.091 [2024-04-26 09:04:12.065009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.065577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.065595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.091 qpair failed and we were unable to recover it. 
00:29:55.091 [2024-04-26 09:04:12.066065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.066585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.066601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.091 qpair failed and we were unable to recover it. 00:29:55.091 [2024-04-26 09:04:12.067105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.067630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.067648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.091 qpair failed and we were unable to recover it. 00:29:55.091 [2024-04-26 09:04:12.068180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.068694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.068711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.091 qpair failed and we were unable to recover it. 00:29:55.091 [2024-04-26 09:04:12.069084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.069564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.069580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.091 qpair failed and we were unable to recover it. 00:29:55.091 [2024-04-26 09:04:12.070087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.070519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.070537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.091 qpair failed and we were unable to recover it. 00:29:55.091 [2024-04-26 09:04:12.070962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.071399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.071415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.091 qpair failed and we were unable to recover it. 00:29:55.091 [2024-04-26 09:04:12.071964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.072518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.091 [2024-04-26 09:04:12.072535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.091 qpair failed and we were unable to recover it. 
00:29:55.091 [2024-04-26 09:04:12.073037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.091 [2024-04-26 09:04:12.073458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.091 [2024-04-26 09:04:12.073476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.091 qpair failed and we were unable to recover it.
00:29:55.091 [2024-04-26 09:04:12.073931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.091 [2024-04-26 09:04:12.074358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.091 [2024-04-26 09:04:12.074375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.091 qpair failed and we were unable to recover it.
00:29:55.091 [2024-04-26 09:04:12.074851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.091 [2024-04-26 09:04:12.075302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.091 [2024-04-26 09:04:12.075319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.091 qpair failed and we were unable to recover it.
[... the same three-entry sequence -- two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." -- repeats roughly 150 further times between 09:04:12.075 and 09:04:12.215 ...]
00:29:55.097 [2024-04-26 09:04:12.216055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.216559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.216575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.217073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.217529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.217545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.218025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.218562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.218578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.218956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.219392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.219408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.219955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.220384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.220399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.220770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.221268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.221284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.221737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.222168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.222184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 
00:29:55.097 [2024-04-26 09:04:12.222646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.223121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.223137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.223613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.224125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.224141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.224651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.225030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.225046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.225464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.225947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.225962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.226409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.226859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.226875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.227384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.227861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.227877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.228311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.228807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.228823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 
00:29:55.097 [2024-04-26 09:04:12.229299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.229740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.229756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.230118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.230575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.230591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.231089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.231596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.231611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.232000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.232494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.232510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.233062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.233561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.233577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.234013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.234501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.234517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.235036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.235547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.235563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 
00:29:55.097 [2024-04-26 09:04:12.236031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.236534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.236550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.237068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.237444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.237470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.237918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.238346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.238362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.238796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.239220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.239235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.239735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.240167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.240183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.097 qpair failed and we were unable to recover it. 00:29:55.097 [2024-04-26 09:04:12.240594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.097 [2024-04-26 09:04:12.240971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.240986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.241435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.241818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.241833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 
00:29:55.098 [2024-04-26 09:04:12.242314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.242798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.242814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.243194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.243613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.243629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.243993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.244403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.244419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.244874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.245248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.245264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.245771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.246140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.246156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.246590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.246945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.246961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.247490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.248005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.248021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 
00:29:55.098 [2024-04-26 09:04:12.248549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.249062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.249077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.249588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.249968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.249983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.250465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.250902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.250918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.251302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.251773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.251790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.252222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.252655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.252671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.253101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.253477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.253493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.253924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.254420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.254436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 
00:29:55.098 [2024-04-26 09:04:12.254872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.255307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.255322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.255828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.256235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.256251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.256707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.257149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.257165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.257644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.258043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.258059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.258442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.258818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.258834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.259259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.259422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.259438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.259897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.260261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.260277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 
00:29:55.098 [2024-04-26 09:04:12.260652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.261145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.261161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.261514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.261943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.261959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.262465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.262645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.262660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.263081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.263504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.263520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.263928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.264414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.264430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.264845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.265269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.265285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.098 qpair failed and we were unable to recover it. 00:29:55.098 [2024-04-26 09:04:12.265649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.098 [2024-04-26 09:04:12.266145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.266161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 
00:29:55.099 [2024-04-26 09:04:12.266593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.266940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.266956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.267404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.267748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.267764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.268127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.268283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.268298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.268728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.269168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.269186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.269667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.270037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.270052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.270418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.270895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.270912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.271343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.271755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.271771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 
00:29:55.099 [2024-04-26 09:04:12.272251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.272672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.272688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.273165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.273603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.273618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.274095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.274586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.274602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.275032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.275468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.275484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.275964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.276362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.276378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.276816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.277159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.277175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.277680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.278020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.278038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 
00:29:55.099 [2024-04-26 09:04:12.278453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.278808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.278824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.279175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.279598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.279614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.279855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.280332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.280347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.280783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.281185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.281200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.281563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.281930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.281946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.282427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.282851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.282867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.283372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.283868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.283884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 
00:29:55.099 [2024-04-26 09:04:12.284333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.284808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.284824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.285300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.285737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.285753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.286251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.286696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.286714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.287088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.287584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.287599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.288017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.288442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.288465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.288903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.289422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.289437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 00:29:55.099 [2024-04-26 09:04:12.289979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.290385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.099 [2024-04-26 09:04:12.290401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.099 qpair failed and we were unable to recover it. 
00:29:55.099 [2024-04-26 09:04:12.290901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.291271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.291287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.291715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.292130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.292146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.292624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.293102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.293117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.293618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.293951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.293966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.294443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.294857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.294873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.295226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.295727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.295745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.296248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.296680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.296696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 
00:29:55.100 [2024-04-26 09:04:12.297077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.297549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.297565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.298013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.298423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.298438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.298870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.299362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.299378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.299878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.300309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.300324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.300803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.301277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.301292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.301723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.302222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.302237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.302607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.303085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.303100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 
00:29:55.100 [2024-04-26 09:04:12.303581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.304014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.304030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.304488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.304904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.304919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.305420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.305917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.305933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.306435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.306858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.306874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.307370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.307795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.307811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.308311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.308735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.308750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.309181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.309599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.309615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 
00:29:55.100 [2024-04-26 09:04:12.310093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.310519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.310535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.311032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.311494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.311510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.311882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.312238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.312253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.100 qpair failed and we were unable to recover it. 00:29:55.100 [2024-04-26 09:04:12.312666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.313161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.100 [2024-04-26 09:04:12.313176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.101 qpair failed and we were unable to recover it. 00:29:55.101 [2024-04-26 09:04:12.313677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.101 [2024-04-26 09:04:12.314104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.101 [2024-04-26 09:04:12.314120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.101 qpair failed and we were unable to recover it. 00:29:55.101 [2024-04-26 09:04:12.314625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.101 [2024-04-26 09:04:12.315101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.101 [2024-04-26 09:04:12.315116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.101 qpair failed and we were unable to recover it. 00:29:55.101 [2024-04-26 09:04:12.315522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.101 [2024-04-26 09:04:12.316025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.101 [2024-04-26 09:04:12.316041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.101 qpair failed and we were unable to recover it. 
00:29:55.101 [2024-04-26 09:04:12.316520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.316968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.316984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.101 qpair failed and we were unable to recover it.
00:29:55.101 [2024-04-26 09:04:12.317483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.317853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.317869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.101 qpair failed and we were unable to recover it.
00:29:55.101 [2024-04-26 09:04:12.318349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.318766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.318783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.101 qpair failed and we were unable to recover it.
00:29:55.101 [2024-04-26 09:04:12.319266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.319762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.319777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.101 qpair failed and we were unable to recover it.
00:29:55.101 [2024-04-26 09:04:12.320197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.320618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.320634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.101 qpair failed and we were unable to recover it.
00:29:55.101 [2024-04-26 09:04:12.321137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.321614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.321629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.101 qpair failed and we were unable to recover it.
00:29:55.101 [2024-04-26 09:04:12.322150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.322629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.322645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.101 qpair failed and we were unable to recover it.
00:29:55.101 [2024-04-26 09:04:12.323123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.323270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.323285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.101 qpair failed and we were unable to recover it.
00:29:55.101 [2024-04-26 09:04:12.323699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.324175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.324191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.101 qpair failed and we were unable to recover it.
00:29:55.101 [2024-04-26 09:04:12.324690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.325211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.325227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.101 qpair failed and we were unable to recover it.
00:29:55.101 [2024-04-26 09:04:12.325668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.326164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.326180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.101 qpair failed and we were unable to recover it.
00:29:55.101 [2024-04-26 09:04:12.326611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.327074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.327090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.101 qpair failed and we were unable to recover it.
00:29:55.101 [2024-04-26 09:04:12.327568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.327990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.328006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.101 qpair failed and we were unable to recover it.
00:29:55.101 [2024-04-26 09:04:12.328417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.328890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.328906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.101 qpair failed and we were unable to recover it.
00:29:55.101 [2024-04-26 09:04:12.329311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.329716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.101 [2024-04-26 09:04:12.329732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.101 qpair failed and we were unable to recover it.
00:29:55.101 [2024-04-26 09:04:12.330251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.330674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.330690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.331188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.331664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.331680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.332167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.332642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.332658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.333101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.333543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.333559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.333970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.334385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.334401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.334879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.335285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.335300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.335664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.336159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.336174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.336601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.337025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.337041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.337568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.337988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.338003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.338501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.338850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.338866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.339348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.339819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.339834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.340214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.340686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.340702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.341069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.341508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.341524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.342005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.342498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.342514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.342945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.343440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.343461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.343909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.344315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.344331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.344777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.345122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.345138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.367 [2024-04-26 09:04:12.345545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.346001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.367 [2024-04-26 09:04:12.346016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.367 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.346446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.346947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.346963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.347320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.347722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.347738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.348173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.348603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.348619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.349097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.349589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.349604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.350009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.350509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.350525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.351007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.351428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.351444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.351885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.352312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.352327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.352828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.353304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.353319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.353849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.354251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.354266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.354768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.355259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.355274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.355752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.356223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.356239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.356741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.357264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.357280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.357770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.358178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.358194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.358680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.359131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.359146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.359627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.360054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.360069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.360523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.361039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.361061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.361537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.362012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.362030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.362486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.362914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.362930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.363410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.363830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.363846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.364263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.364759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.364775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.365301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.365744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.365762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.366209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.366640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.366657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.367030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.367444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.367466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.368 [2024-04-26 09:04:12.367895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.368369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.368 [2024-04-26 09:04:12.368385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.368 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.368837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.369307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.369322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.369777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.370255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.370271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.370638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.371058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.371074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.371292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.371766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.371782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.372209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.372637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.372653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.373201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.373696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.373712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.374187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.374612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.374637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.375057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.375549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.375565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.375921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.376405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.376421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.376960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.377173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.377189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.377670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.378096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.378111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.378591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.379079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.379097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.379446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.379952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.379967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.380351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.380780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.380796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.381285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.381784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.381800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.382277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.382746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.382762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.383266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.383626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.383642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.384069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.384483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.384499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.384988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.385497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.385513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.385952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.386400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.386415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.386926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.387301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.387316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.387834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.388237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.388255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.388602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.389097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.389113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.369 qpair failed and we were unable to recover it.
00:29:55.369 [2024-04-26 09:04:12.389595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.369 [2024-04-26 09:04:12.390020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.390036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.390476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.390994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.391009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.391431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.391910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.391926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.392406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.392806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.392823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.393328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.393766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.393782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.394275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.394723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.394739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.395172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.395580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.395596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.396015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.396424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.396439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.396946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.397371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.397386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.397828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.398231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.398246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.398769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.399238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.399253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.399702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.400132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.400148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.400567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.400736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.400751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.401262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.401602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.401618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.402122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.402505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.402521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.403024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.403389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.403405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.403904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.404407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.404423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.404873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.405323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.405338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.405785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.405997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.406012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.406492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.406918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.406934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.407296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.407646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.407662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.408099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.408596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.370 [2024-04-26 09:04:12.408612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.370 qpair failed and we were unable to recover it.
00:29:55.370 [2024-04-26 09:04:12.409114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.409619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.409635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.410136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.410611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.410627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.411161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.411661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.411677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.412101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.412573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.412589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.412962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.413499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.413515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.414018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.414490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.414506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.415040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.415538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.415554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.416047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.416483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.416499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.416662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.417160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.417176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.417654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.418174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.418190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.418619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.419048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.419064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.419443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.419874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.419889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.420058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.420489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.420505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.420933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.421366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.421382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.421888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.422382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.422397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.422826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.423315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.423331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.423747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.424239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.424254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.424658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.425153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.425169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.425604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.426100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.426115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.426617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.427093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.427108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.427542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.428027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.428043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.428494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.428870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.428885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.371 [2024-04-26 09:04:12.429382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.429878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.371 [2024-04-26 09:04:12.429894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.371 qpair failed and we were unable to recover it.
00:29:55.372 [2024-04-26 09:04:12.430304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.372 [2024-04-26 09:04:12.430793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.372 [2024-04-26 09:04:12.430810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.372 qpair failed and we were unable to recover it.
00:29:55.372 [2024-04-26 09:04:12.431312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.431786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.431802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 00:29:55.372 [2024-04-26 09:04:12.432239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.432723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.432738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 00:29:55.372 [2024-04-26 09:04:12.433240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.433592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.433608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 00:29:55.372 [2024-04-26 09:04:12.433823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.434233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.434252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 00:29:55.372 [2024-04-26 09:04:12.434736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.435190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.435206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 00:29:55.372 [2024-04-26 09:04:12.435645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.436144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.436160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 00:29:55.372 [2024-04-26 09:04:12.436664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.437163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.437179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 
00:29:55.372 [2024-04-26 09:04:12.437653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.438174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.438189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 00:29:55.372 [2024-04-26 09:04:12.438539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.439038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.439054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 00:29:55.372 [2024-04-26 09:04:12.439482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.439978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.439993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 00:29:55.372 [2024-04-26 09:04:12.440416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.440845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.440862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 00:29:55.372 [2024-04-26 09:04:12.441294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.441700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.441716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 00:29:55.372 [2024-04-26 09:04:12.442216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.442709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.442725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 00:29:55.372 [2024-04-26 09:04:12.443233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.443661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.443677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 
00:29:55.372 [2024-04-26 09:04:12.444118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.444473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.444488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 00:29:55.372 [2024-04-26 09:04:12.444984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.445418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.445434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 00:29:55.372 [2024-04-26 09:04:12.445943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.446418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.446434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.372 qpair failed and we were unable to recover it. 00:29:55.372 [2024-04-26 09:04:12.446896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.447369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.372 [2024-04-26 09:04:12.447384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.447812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.448306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.448322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.448743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.449175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.449190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.449627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.450074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.450089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 
00:29:55.373 [2024-04-26 09:04:12.450590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.451014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.451030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.451258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.451725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.451741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.452241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.452689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.452704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.453132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.453606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.453622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.454100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.454596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.454612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.455049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.455546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.455562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.456063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.456543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.456559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 
00:29:55.373 [2024-04-26 09:04:12.457007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.457479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.457495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.458021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.458500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.458516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.458944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.459414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.459430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.459909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.460407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.460423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.460953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.461435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.461454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.461877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.462361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.462376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.462876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.463397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.463413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 
00:29:55.373 [2024-04-26 09:04:12.463917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.464391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.464407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.464855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.465294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.465310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.465787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.466264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.466279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.466794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.467239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.467254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.467684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.468156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.468171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.468670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.469144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.469160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.469647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.470141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.470157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 
00:29:55.373 [2024-04-26 09:04:12.470569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.471057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.471073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.373 qpair failed and we were unable to recover it. 00:29:55.373 [2024-04-26 09:04:12.471502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.373 [2024-04-26 09:04:12.471884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.471900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.472379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.472795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.472811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.473293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.473764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.473780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.473998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.474436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.474455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.474954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.475454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.475469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.475946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.476285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.476300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 
00:29:55.374 [2024-04-26 09:04:12.476724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.477221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.477237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.477684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.478083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.478098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.478573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.479083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.479099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.479617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.480108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.480124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.480625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.481041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.481057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.481560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.482001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.482019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.482382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.482851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.482867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 
00:29:55.374 [2024-04-26 09:04:12.483392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.483854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.483870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.484395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.484887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.484903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.485401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.485821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.485837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.486340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.486860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.486876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.487365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.487856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.487873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.488398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.488825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.488841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.489346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.489819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.489835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 
00:29:55.374 [2024-04-26 09:04:12.490321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.490665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.490681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.491044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.491553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.491569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.492072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.492492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.492507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.374 [2024-04-26 09:04:12.492935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.493367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.374 [2024-04-26 09:04:12.493382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.374 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.493758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.494227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.494242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.494720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.495127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.495143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.495570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.495973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.495989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 
00:29:55.375 [2024-04-26 09:04:12.496489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.496987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.497002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.497383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.497857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.497873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.498350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.498819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.498834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.499268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.499705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.499721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.500136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.500633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.500649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.501153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.501660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.501676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.502181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.502627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.502643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 
00:29:55.375 [2024-04-26 09:04:12.503146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.503621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.503637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.504163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.504668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.504684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.505107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.505599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.505615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.506092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.506520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.506537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.506999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.507328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.507343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.507768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.508266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.508281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.508718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.509143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.509159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 
00:29:55.375 [2024-04-26 09:04:12.509589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.510022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.510037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.510463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.510934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.510949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.511431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.511877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.511893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.512345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.512840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.512856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.513362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.513864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.513880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.514358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.514799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.514815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.375 [2024-04-26 09:04:12.515264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.515686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.515703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 
00:29:55.375 [2024-04-26 09:04:12.516063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.516538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.375 [2024-04-26 09:04:12.516554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.375 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.517059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.517411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.517427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.517906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.518310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.518325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.518500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.518994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.519010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.519515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.519967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.519983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.520489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.521002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.521017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.521249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.521608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.521624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 
00:29:55.376 [2024-04-26 09:04:12.522114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.522604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.522619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.523122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.523633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.523649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.523869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.524365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.524382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.524811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.525022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.525037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.525459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.525932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.525948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.526476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.526953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.526968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.527402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.527900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.527916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 
00:29:55.376 [2024-04-26 09:04:12.528420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.528930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.528949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.529361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.529778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.529794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.530279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.530701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.530717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.531135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.531608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.531624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.532060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.532545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.532561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.533063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.533222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.533238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 00:29:55.376 [2024-04-26 09:04:12.533688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.534173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.376 [2024-04-26 09:04:12.534189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.376 qpair failed and we were unable to recover it. 
00:29:55.376 [2024-04-26 09:04:12.534563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.376 [2024-04-26 09:04:12.535081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.376 [2024-04-26 09:04:12.535097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.376 qpair failed and we were unable to recover it.
[... ~150 identical reconnect attempts omitted: each pair of posix.c:1037 connect() failures (errno = 111, ECONNREFUSED) is followed by an nvme_tcp.c:2371 sock connection error for tqpair=0x2244780 at 10.0.0.2:4420 and "qpair failed and we were unable to recover it."; timestamps run 2024-04-26 09:04:12.535 through 09:04:12.670 ...]
00:29:55.650 [2024-04-26 09:04:12.670467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.650 [2024-04-26 09:04:12.670939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.650 [2024-04-26 09:04:12.670955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420
00:29:55.650 qpair failed and we were unable to recover it.
00:29:55.650 [2024-04-26 09:04:12.671435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.671856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.671872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.672369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.672781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.672797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.673275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.673705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.673724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.674156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.674597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.674613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.675091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.675568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.675583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.676013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.676504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.676520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.676968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.677457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.677473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 
00:29:55.650 [2024-04-26 09:04:12.677900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.678378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.678393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.678817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.679303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.679318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.679823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.680333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.680349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.680829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.681347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.681362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.681792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.682222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.682237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.682716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.683076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.683091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.683368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.683843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.683860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 
00:29:55.650 [2024-04-26 09:04:12.684359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.684889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.684905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.685390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.685748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.685764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.686177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.686445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.686465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.686831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.687307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.687322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.687830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.688281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.688297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.688804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.689276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.689292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 00:29:55.650 [2024-04-26 09:04:12.689799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.690315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.650 [2024-04-26 09:04:12.690331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2244780 with addr=10.0.0.2, port=4420 00:29:55.650 qpair failed and we were unable to recover it. 
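errno = 111 is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 yet, because the target-side setup traced further down in this log has not reached the point of adding a listener, so every initiator connect() is refused and retried. A minimal shell sketch (not part of the test scripts) of waiting for that listener to come up:

    # Poll the NVMe/TCP address until something accepts connections; each
    # refused probe corresponds to one of the errno = 111 failures above.
    while ! nc -z 10.0.0.2 4420; do
        sleep 0.1
    done
    echo "10.0.0.2:4420 is now accepting connections"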
[... connect() failed (errno = 111) retries for tqpair=0x2244780 continue, 09:04:12.690 through 09:04:12.693 ...]
00:29:55.651 09:04:12 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:29:55.651 09:04:12 -- common/autotest_common.sh@850 -- # return 0
00:29:55.651 09:04:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:29:55.651 09:04:12 -- common/autotest_common.sh@716 -- # xtrace_disable
00:29:55.651 09:04:12 -- common/autotest_common.sh@10 -- # set +x
[... retries continue, 09:04:12.693 through 09:04:12.696 ...]
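timing_exit here closes the timed start_nvmf_tgt phase that the framework opened earlier with timing_enter; the pair brackets each test phase for the timing report. A sketch of the pattern (timing_enter/timing_exit are helpers from SPDK's autotest_common.sh; the body is illustrative):

    timing_enter start_nvmf_tgt
    # ... launch the nvmf target and wait for its RPC socket to respond ...
    timing_exit start_nvmf_tgt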
[... connect() failed (errno = 111) / sock connection error sequences for tqpair=0x2244780 repeat from 09:04:12.696 through 09:04:12.732, each ending in "qpair failed and we were unable to recover it." ...]
[... final connect() failed (errno = 111) retries for tqpair=0x2244780 run from 09:04:12.732 through 09:04:12.735 ...]
00:29:55.652 [2024-04-26 09:04:12.736041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.652 [2024-04-26 09:04:12.736060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420
00:29:55.652 qpair failed and we were unable to recover it.
[... from this point the failures are against the new qpair 0x7fcdb4000b90; the same sequence repeats through 09:04:12.738 ...]
00:29:55.653 09:04:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:55.653 09:04:12 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:55.653 09:04:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:55.653 09:04:12 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) retries for tqpair=0x7fcdb4000b90 continue, 09:04:12.739 through 09:04:12.743 ...]
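rpc_cmd is the autotest wrapper that forwards its arguments to SPDK's scripts/rpc.py against the running target. The equivalent direct invocation of the call traced above (a sketch assuming the default RPC socket) is:

    # Create a 64 MiB malloc-backed bdev with a 512-byte block size, named
    # Malloc0; on success rpc.py echoes the bdev name back (the "Malloc0"
    # output a few lines below).
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0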
[... connect() failed (errno = 111) / sock connection error sequences for tqpair=0x7fcdb4000b90 repeat from 09:04:12.744 through 09:04:12.755 ...]
[... connect() failed (errno = 111) retries for tqpair=0x7fcdb4000b90 continue, 09:04:12.756 through 09:04:12.757 ...]
00:29:55.653 Malloc0
00:29:55.653 09:04:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:29:55.653 09:04:12 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:55.653 09:04:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:29:55.653 09:04:12 -- common/autotest_common.sh@10 -- # set +x
[... retries continue, 09:04:12.758 through 09:04:12.761 ...]
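A transport must be created in the target before any subsystem can listen on it; this call is what produces the "*** TCP Transport Init ***" notice just below. The direct rpc.py form of the traced command (a sketch reusing the same flags as the log) would be:

    # Initialize the TCP transport inside the nvmf target.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o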
[... connect() failed (errno = 111) retries for tqpair=0x7fcdb4000b90 continue, 09:04:12.762 through 09:04:12.764 ...]
00:29:55.654 [2024-04-26 09:04:12.765128] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... retries continue, 09:04:12.765 through 09:04:12.767 ...]
00:29:55.654 [2024-04-26 09:04:12.768262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.768760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.768779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.654 [2024-04-26 09:04:12.769228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.769652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.769668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.654 [2024-04-26 09:04:12.770098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.770510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.770527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.654 [2024-04-26 09:04:12.770942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.771371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.771387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.654 [2024-04-26 09:04:12.771846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.772279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.772294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.654 [2024-04-26 09:04:12.772470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.772942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.772958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.654 [2024-04-26 09:04:12.773473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 09:04:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.654 [2024-04-26 09:04:12.773919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.773935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 
00:29:55.654 09:04:12 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:55.654 [2024-04-26 09:04:12.774460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 09:04:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.654 09:04:12 -- common/autotest_common.sh@10 -- # set +x 00:29:55.654 [2024-04-26 09:04:12.774946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.774962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.654 [2024-04-26 09:04:12.775406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.775906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.775922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.654 [2024-04-26 09:04:12.776334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.776807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.776823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.654 [2024-04-26 09:04:12.777316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.777741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.777758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.654 [2024-04-26 09:04:12.778206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.778604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.778620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.654 [2024-04-26 09:04:12.779045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.779480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.779496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.654 [2024-04-26 09:04:12.779925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.780291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.780307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 
00:29:55.654 [2024-04-26 09:04:12.780546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.780793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.780809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.654 [2024-04-26 09:04:12.781240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.781665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.781681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.654 09:04:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.654 [2024-04-26 09:04:12.782120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 09:04:12 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:55.654 09:04:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.654 [2024-04-26 09:04:12.782593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.782609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.654 09:04:12 -- common/autotest_common.sh@10 -- # set +x 00:29:55.654 [2024-04-26 09:04:12.783025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.783465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.654 [2024-04-26 09:04:12.783481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.654 qpair failed and we were unable to recover it. 00:29:55.655 [2024-04-26 09:04:12.783988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.784482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.784498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.655 qpair failed and we were unable to recover it. 00:29:55.655 [2024-04-26 09:04:12.785014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.785515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.785531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.655 qpair failed and we were unable to recover it. 00:29:55.655 [2024-04-26 09:04:12.785951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.786423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.786439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.655 qpair failed and we were unable to recover it. 
00:29:55.655 [2024-04-26 09:04:12.786927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.787339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.787354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.655 qpair failed and we were unable to recover it. 00:29:55.655 [2024-04-26 09:04:12.787859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.788284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.788299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.655 qpair failed and we were unable to recover it. 00:29:55.655 [2024-04-26 09:04:12.788799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.789295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.789311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.655 qpair failed and we were unable to recover it. 00:29:55.655 [2024-04-26 09:04:12.789673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 09:04:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.655 [2024-04-26 09:04:12.790196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.790213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.655 qpair failed and we were unable to recover it. 00:29:55.655 09:04:12 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:55.655 09:04:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.655 [2024-04-26 09:04:12.790628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 09:04:12 -- common/autotest_common.sh@10 -- # set +x 00:29:55.655 [2024-04-26 09:04:12.791104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.791120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.655 qpair failed and we were unable to recover it. 00:29:55.655 [2024-04-26 09:04:12.791604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.792037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.792052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.655 qpair failed and we were unable to recover it. 
00:29:55.655 [2024-04-26 09:04:12.792474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.792873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.792889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcdb4000b90 with addr=10.0.0.2, port=4420 00:29:55.655 qpair failed and we were unable to recover it. 00:29:55.655 [2024-04-26 09:04:12.793306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.655 [2024-04-26 09:04:12.793377] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.655 [2024-04-26 09:04:12.796539] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:29:55.655 [2024-04-26 09:04:12.796589] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fcdb4000b90 (107): Transport endpoint is not connected 00:29:55.655 [2024-04-26 09:04:12.796645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.655 qpair failed and we were unable to recover it. 00:29:55.655 09:04:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.655 09:04:12 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:55.655 09:04:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:55.655 09:04:12 -- common/autotest_common.sh@10 -- # set +x 00:29:55.655 [2024-04-26 09:04:12.805759] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.655 [2024-04-26 09:04:12.805920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.655 [2024-04-26 09:04:12.805944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.655 [2024-04-26 09:04:12.805956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.655 [2024-04-26 09:04:12.805966] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.655 09:04:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:55.655 [2024-04-26 09:04:12.805997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.655 qpair failed and we were unable to recover it.
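For readers reconstructing the target-side setup that the xtrace lines above step through, the same configuration can be driven by hand with SPDK's stock scripts/rpc.py client against a running nvmf_tgt. A minimal sketch; every RPC name and flag below is taken from the trace lines except bdev_malloc_create, whose size arguments are not visible in this log and are placeholders:

    # create the TCP transport and a malloc-backed subsystem
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512    # sizes assumed; the log only shows the bdev name
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # expose the subsystem and the discovery service on 10.0.0.2:4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420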
00:29:55.655 09:04:12 -- host/target_disconnect.sh@58 -- # wait 2237643 00:29:55.655 [2024-04-26 09:04:12.815759] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.655 [2024-04-26 09:04:12.815895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.655 [2024-04-26 09:04:12.815915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.655 [2024-04-26 09:04:12.815926] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.655 [2024-04-26 09:04:12.815934] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.655 [2024-04-26 09:04:12.815954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.655 qpair failed and we were unable to recover it. 00:29:55.655 [2024-04-26 09:04:12.825691] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.655 [2024-04-26 09:04:12.825829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.655 [2024-04-26 09:04:12.825849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.655 [2024-04-26 09:04:12.825860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.655 [2024-04-26 09:04:12.825868] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.655 [2024-04-26 09:04:12.825888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.655 qpair failed and we were unable to recover it. 00:29:55.655 [2024-04-26 09:04:12.835715] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.655 [2024-04-26 09:04:12.835857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.655 [2024-04-26 09:04:12.835876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.655 [2024-04-26 09:04:12.835890] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.655 [2024-04-26 09:04:12.835898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.655 [2024-04-26 09:04:12.835918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.655 qpair failed and we were unable to recover it. 
00:29:55.655 [2024-04-26 09:04:12.845718] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.655 [2024-04-26 09:04:12.845851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.655 [2024-04-26 09:04:12.845871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.655 [2024-04-26 09:04:12.845881] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.655 [2024-04-26 09:04:12.845890] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.655 [2024-04-26 09:04:12.845910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.655 qpair failed and we were unable to recover it. 00:29:55.655 [2024-04-26 09:04:12.855772] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.655 [2024-04-26 09:04:12.855902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.655 [2024-04-26 09:04:12.855922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.655 [2024-04-26 09:04:12.855932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.655 [2024-04-26 09:04:12.855940] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.655 [2024-04-26 09:04:12.855960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.655 qpair failed and we were unable to recover it. 00:29:55.655 [2024-04-26 09:04:12.865802] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.655 [2024-04-26 09:04:12.865935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.655 [2024-04-26 09:04:12.865955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.655 [2024-04-26 09:04:12.865965] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.655 [2024-04-26 09:04:12.865973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.655 [2024-04-26 09:04:12.865993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.655 qpair failed and we were unable to recover it. 
00:29:55.656 [2024-04-26 09:04:12.875839] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.656 [2024-04-26 09:04:12.875977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.656 [2024-04-26 09:04:12.875997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.656 [2024-04-26 09:04:12.876007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.656 [2024-04-26 09:04:12.876016] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.656 [2024-04-26 09:04:12.876035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.656 qpair failed and we were unable to recover it. 00:29:55.981 [2024-04-26 09:04:12.885856] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.981 [2024-04-26 09:04:12.885985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.981 [2024-04-26 09:04:12.886004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.981 [2024-04-26 09:04:12.886014] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.981 [2024-04-26 09:04:12.886022] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.981 [2024-04-26 09:04:12.886041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.981 qpair failed and we were unable to recover it. 00:29:55.981 [2024-04-26 09:04:12.895922] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.981 [2024-04-26 09:04:12.896069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.981 [2024-04-26 09:04:12.896089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.981 [2024-04-26 09:04:12.896099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.981 [2024-04-26 09:04:12.896108] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.981 [2024-04-26 09:04:12.896127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.981 qpair failed and we were unable to recover it. 
00:29:55.981 [2024-04-26 09:04:12.905915] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.981 [2024-04-26 09:04:12.906056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.981 [2024-04-26 09:04:12.906076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.981 [2024-04-26 09:04:12.906086] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.981 [2024-04-26 09:04:12.906095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.981 [2024-04-26 09:04:12.906114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.981 qpair failed and we were unable to recover it. 00:29:55.981 [2024-04-26 09:04:12.915937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.981 [2024-04-26 09:04:12.916069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.981 [2024-04-26 09:04:12.916089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.981 [2024-04-26 09:04:12.916099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.981 [2024-04-26 09:04:12.916107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.981 [2024-04-26 09:04:12.916127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.981 qpair failed and we were unable to recover it. 00:29:55.981 [2024-04-26 09:04:12.926006] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:12.926139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:12.926165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.982 [2024-04-26 09:04:12.926175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.982 [2024-04-26 09:04:12.926183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.982 [2024-04-26 09:04:12.926202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.982 qpair failed and we were unable to recover it. 
00:29:55.982 [2024-04-26 09:04:12.936037] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:12.936168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:12.936187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.982 [2024-04-26 09:04:12.936197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.982 [2024-04-26 09:04:12.936206] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.982 [2024-04-26 09:04:12.936225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.982 qpair failed and we were unable to recover it. 00:29:55.982 [2024-04-26 09:04:12.946017] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:12.946152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:12.946171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.982 [2024-04-26 09:04:12.946182] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.982 [2024-04-26 09:04:12.946190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.982 [2024-04-26 09:04:12.946210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.982 qpair failed and we were unable to recover it. 00:29:55.982 [2024-04-26 09:04:12.956064] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:12.956196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:12.956216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.982 [2024-04-26 09:04:12.956226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.982 [2024-04-26 09:04:12.956234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.982 [2024-04-26 09:04:12.956253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.982 qpair failed and we were unable to recover it. 
00:29:55.982 [2024-04-26 09:04:12.966102] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:12.966233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:12.966252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.982 [2024-04-26 09:04:12.966262] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.982 [2024-04-26 09:04:12.966271] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.982 [2024-04-26 09:04:12.966294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.982 qpair failed and we were unable to recover it. 00:29:55.982 [2024-04-26 09:04:12.976346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:12.976500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:12.976520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.982 [2024-04-26 09:04:12.976531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.982 [2024-04-26 09:04:12.976539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.982 [2024-04-26 09:04:12.976559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.982 qpair failed and we were unable to recover it. 00:29:55.982 [2024-04-26 09:04:12.986133] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:12.986262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:12.986282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.982 [2024-04-26 09:04:12.986293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.982 [2024-04-26 09:04:12.986301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.982 [2024-04-26 09:04:12.986321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.982 qpair failed and we were unable to recover it. 
00:29:55.982 [2024-04-26 09:04:12.996177] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:12.996307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:12.996327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.982 [2024-04-26 09:04:12.996337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.982 [2024-04-26 09:04:12.996345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.982 [2024-04-26 09:04:12.996365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.982 qpair failed and we were unable to recover it. 00:29:55.982 [2024-04-26 09:04:13.006225] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:13.006358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:13.006377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.982 [2024-04-26 09:04:13.006388] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.982 [2024-04-26 09:04:13.006396] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.982 [2024-04-26 09:04:13.006416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.982 qpair failed and we were unable to recover it. 00:29:55.982 [2024-04-26 09:04:13.016169] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:13.016347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:13.016370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.982 [2024-04-26 09:04:13.016380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.982 [2024-04-26 09:04:13.016389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.982 [2024-04-26 09:04:13.016408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.982 qpair failed and we were unable to recover it. 
00:29:55.982 [2024-04-26 09:04:13.026270] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:13.026615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:13.026634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.982 [2024-04-26 09:04:13.026643] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.982 [2024-04-26 09:04:13.026652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.982 [2024-04-26 09:04:13.026671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.982 qpair failed and we were unable to recover it. 00:29:55.982 [2024-04-26 09:04:13.036305] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:13.036439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:13.036463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.982 [2024-04-26 09:04:13.036474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.982 [2024-04-26 09:04:13.036483] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.982 [2024-04-26 09:04:13.036503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.982 qpair failed and we were unable to recover it. 00:29:55.982 [2024-04-26 09:04:13.046332] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:13.046471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:13.046491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.982 [2024-04-26 09:04:13.046501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.982 [2024-04-26 09:04:13.046509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.982 [2024-04-26 09:04:13.046528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.982 qpair failed and we were unable to recover it. 
00:29:55.982 [2024-04-26 09:04:13.056513] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:13.056659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:13.056679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.982 [2024-04-26 09:04:13.056689] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.982 [2024-04-26 09:04:13.056698] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.982 [2024-04-26 09:04:13.056721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.982 qpair failed and we were unable to recover it. 00:29:55.982 [2024-04-26 09:04:13.066422] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:13.066559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:13.066579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.982 [2024-04-26 09:04:13.066589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.982 [2024-04-26 09:04:13.066598] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.982 [2024-04-26 09:04:13.066617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.982 qpair failed and we were unable to recover it. 00:29:55.982 [2024-04-26 09:04:13.076384] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.982 [2024-04-26 09:04:13.076591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.982 [2024-04-26 09:04:13.076611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.076620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.076629] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.076648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 
00:29:55.983 [2024-04-26 09:04:13.086485] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.983 [2024-04-26 09:04:13.086619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.983 [2024-04-26 09:04:13.086639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.086650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.086658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.086677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-04-26 09:04:13.096431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.983 [2024-04-26 09:04:13.096602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.983 [2024-04-26 09:04:13.096621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.096631] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.096640] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.096659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-04-26 09:04:13.106488] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.983 [2024-04-26 09:04:13.106629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.983 [2024-04-26 09:04:13.106648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.106658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.106667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.106686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 
00:29:55.983 [2024-04-26 09:04:13.116528] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.983 [2024-04-26 09:04:13.116659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.983 [2024-04-26 09:04:13.116678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.116688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.116697] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.116716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-04-26 09:04:13.126528] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.983 [2024-04-26 09:04:13.126705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.983 [2024-04-26 09:04:13.126724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.126734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.126742] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.126761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-04-26 09:04:13.136558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.983 [2024-04-26 09:04:13.136714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.983 [2024-04-26 09:04:13.136733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.136743] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.136752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.136771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 
00:29:55.983 [2024-04-26 09:04:13.146581] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.983 [2024-04-26 09:04:13.146716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.983 [2024-04-26 09:04:13.146735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.146745] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.146757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.146777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-04-26 09:04:13.156624] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.983 [2024-04-26 09:04:13.156754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.983 [2024-04-26 09:04:13.156773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.156783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.156792] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.156811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-04-26 09:04:13.166617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.983 [2024-04-26 09:04:13.166773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.983 [2024-04-26 09:04:13.166792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.166802] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.166811] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.166830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 
00:29:55.983 [2024-04-26 09:04:13.176675] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.983 [2024-04-26 09:04:13.176831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.983 [2024-04-26 09:04:13.176850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.176860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.176869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.176888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-04-26 09:04:13.186737] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.983 [2024-04-26 09:04:13.186867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.983 [2024-04-26 09:04:13.186886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.186896] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.186905] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.186924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-04-26 09:04:13.196749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.983 [2024-04-26 09:04:13.196874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.983 [2024-04-26 09:04:13.196894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.196904] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.196912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.196932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 
00:29:55.983 [2024-04-26 09:04:13.206981] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.983 [2024-04-26 09:04:13.207124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.983 [2024-04-26 09:04:13.207144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.207154] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.207162] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.207181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-04-26 09:04:13.216813] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.983 [2024-04-26 09:04:13.216941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.983 [2024-04-26 09:04:13.216960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.216970] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.216979] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.216999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 00:29:55.983 [2024-04-26 09:04:13.226779] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.983 [2024-04-26 09:04:13.226925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.983 [2024-04-26 09:04:13.226944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.983 [2024-04-26 09:04:13.226954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.983 [2024-04-26 09:04:13.226963] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:55.983 [2024-04-26 09:04:13.226981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.983 qpair failed and we were unable to recover it. 
00:29:56.243 [2024-04-26 09:04:13.236841] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.236971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.236990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.237003] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.237011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.237032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.246907] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.247049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.247069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.247079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.247087] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.247106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.256892] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.257021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.257040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.257050] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.257058] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.257078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.266926] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.267058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.267077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.267087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.267095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.267114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.276939] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.277077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.277096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.277106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.277115] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.277134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.286902] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.287039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.287059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.287069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.287077] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.287097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.296985] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.297114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.297133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.297144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.297152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.297171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.307035] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.307185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.307204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.307214] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.307223] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.307242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.317003] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.317167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.317186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.317197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.317205] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.317224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.327082] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.327211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.327230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.327243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.327251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.327270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.337039] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.337186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.337206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.337216] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.337225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.337244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.347202] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.347378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.347397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.347407] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.347416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.347436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.357125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.357263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.357282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.357293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.357302] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.357322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.367225] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.367354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.367374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.367384] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.367393] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.367412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.377239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.377389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.377408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.377418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.377426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.377445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.387262] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.387395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.387414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.387424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.387432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.387459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.397324] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.397468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.397488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.397498] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.397507] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.397526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.407364] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.407514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.407534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.407544] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.407553] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.407572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.417352] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.417489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.243 [2024-04-26 09:04:13.417512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.243 [2024-04-26 09:04:13.417522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.243 [2024-04-26 09:04:13.417531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.243 [2024-04-26 09:04:13.417550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.243 qpair failed and we were unable to recover it.
00:29:56.243 [2024-04-26 09:04:13.427387] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.243 [2024-04-26 09:04:13.427524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.244 [2024-04-26 09:04:13.427543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.244 [2024-04-26 09:04:13.427553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.244 [2024-04-26 09:04:13.427562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.244 [2024-04-26 09:04:13.427581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.244 qpair failed and we were unable to recover it.
00:29:56.244 [2024-04-26 09:04:13.437354] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.244 [2024-04-26 09:04:13.437492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.244 [2024-04-26 09:04:13.437512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.244 [2024-04-26 09:04:13.437522] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.244 [2024-04-26 09:04:13.437530] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.244 [2024-04-26 09:04:13.437550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.244 qpair failed and we were unable to recover it.
00:29:56.244 [2024-04-26 09:04:13.447443] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.244 [2024-04-26 09:04:13.447577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.244 [2024-04-26 09:04:13.447596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.244 [2024-04-26 09:04:13.447606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.244 [2024-04-26 09:04:13.447615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.244 [2024-04-26 09:04:13.447634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.244 qpair failed and we were unable to recover it.
00:29:56.244 [2024-04-26 09:04:13.457457] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.244 [2024-04-26 09:04:13.457586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.244 [2024-04-26 09:04:13.457605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.244 [2024-04-26 09:04:13.457616] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.244 [2024-04-26 09:04:13.457624] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.244 [2024-04-26 09:04:13.457647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.244 qpair failed and we were unable to recover it.
00:29:56.244 [2024-04-26 09:04:13.467509] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.244 [2024-04-26 09:04:13.467641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.244 [2024-04-26 09:04:13.467661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.244 [2024-04-26 09:04:13.467671] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.244 [2024-04-26 09:04:13.467679] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.244 [2024-04-26 09:04:13.467699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.244 qpair failed and we were unable to recover it.
00:29:56.244 [2024-04-26 09:04:13.477557] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.244 [2024-04-26 09:04:13.477706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.244 [2024-04-26 09:04:13.477725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.244 [2024-04-26 09:04:13.477735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.244 [2024-04-26 09:04:13.477744] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.244 [2024-04-26 09:04:13.477763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.244 qpair failed and we were unable to recover it.
00:29:56.244 [2024-04-26 09:04:13.487547] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.244 [2024-04-26 09:04:13.487679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.244 [2024-04-26 09:04:13.487698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.244 [2024-04-26 09:04:13.487708] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.244 [2024-04-26 09:04:13.487717] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.244 [2024-04-26 09:04:13.487736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.244 qpair failed and we were unable to recover it.
00:29:56.502 [2024-04-26 09:04:13.497580] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.502 [2024-04-26 09:04:13.497709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.502 [2024-04-26 09:04:13.497728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.502 [2024-04-26 09:04:13.497738] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.502 [2024-04-26 09:04:13.497747] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.502 [2024-04-26 09:04:13.497767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.502 qpair failed and we were unable to recover it.
00:29:56.502 [2024-04-26 09:04:13.507611] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.502 [2024-04-26 09:04:13.507987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.502 [2024-04-26 09:04:13.508010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.502 [2024-04-26 09:04:13.508020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.502 [2024-04-26 09:04:13.508028] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.502 [2024-04-26 09:04:13.508047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.502 qpair failed and we were unable to recover it.
00:29:56.502 [2024-04-26 09:04:13.517627] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.502 [2024-04-26 09:04:13.517772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.502 [2024-04-26 09:04:13.517791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.502 [2024-04-26 09:04:13.517801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.502 [2024-04-26 09:04:13.517810] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.502 [2024-04-26 09:04:13.517829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.502 qpair failed and we were unable to recover it.
00:29:56.502 [2024-04-26 09:04:13.527664] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.502 [2024-04-26 09:04:13.527796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.502 [2024-04-26 09:04:13.527815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.502 [2024-04-26 09:04:13.527825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.503 [2024-04-26 09:04:13.527833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.503 [2024-04-26 09:04:13.527853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.503 qpair failed and we were unable to recover it.
00:29:56.503 [2024-04-26 09:04:13.537711] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.503 [2024-04-26 09:04:13.537839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.503 [2024-04-26 09:04:13.537858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.503 [2024-04-26 09:04:13.537868] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.503 [2024-04-26 09:04:13.537877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.503 [2024-04-26 09:04:13.537896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.503 qpair failed and we were unable to recover it.
00:29:56.503 [2024-04-26 09:04:13.547671] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.503 [2024-04-26 09:04:13.547804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.503 [2024-04-26 09:04:13.547823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.503 [2024-04-26 09:04:13.547833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.503 [2024-04-26 09:04:13.547845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.503 [2024-04-26 09:04:13.547864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.503 qpair failed and we were unable to recover it.
00:29:56.503 [2024-04-26 09:04:13.557754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.503 [2024-04-26 09:04:13.557888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.503 [2024-04-26 09:04:13.557907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.503 [2024-04-26 09:04:13.557917] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.503 [2024-04-26 09:04:13.557926] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.503 [2024-04-26 09:04:13.557945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.503 qpair failed and we were unable to recover it.
00:29:56.503 [2024-04-26 09:04:13.567783] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.503 [2024-04-26 09:04:13.567919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.503 [2024-04-26 09:04:13.567938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.503 [2024-04-26 09:04:13.567948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.503 [2024-04-26 09:04:13.567957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.503 [2024-04-26 09:04:13.567975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.503 qpair failed and we were unable to recover it.
00:29:56.503 [2024-04-26 09:04:13.577765] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.503 [2024-04-26 09:04:13.577890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.503 [2024-04-26 09:04:13.577909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.503 [2024-04-26 09:04:13.577919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.503 [2024-04-26 09:04:13.577928] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.503 [2024-04-26 09:04:13.577947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.503 qpair failed and we were unable to recover it.
00:29:56.503 [2024-04-26 09:04:13.587838] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.503 [2024-04-26 09:04:13.587981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.503 [2024-04-26 09:04:13.588001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.503 [2024-04-26 09:04:13.588011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.503 [2024-04-26 09:04:13.588019] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.503 [2024-04-26 09:04:13.588039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.503 qpair failed and we were unable to recover it.
00:29:56.503 [2024-04-26 09:04:13.597847] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.503 [2024-04-26 09:04:13.597980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.503 [2024-04-26 09:04:13.598000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.503 [2024-04-26 09:04:13.598010] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.503 [2024-04-26 09:04:13.598018] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.503 [2024-04-26 09:04:13.598038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.503 qpair failed and we were unable to recover it.
00:29:56.503 [2024-04-26 09:04:13.607883] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.503 [2024-04-26 09:04:13.608058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.503 [2024-04-26 09:04:13.608078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.503 [2024-04-26 09:04:13.608088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.503 [2024-04-26 09:04:13.608097] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.503 [2024-04-26 09:04:13.608116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.503 qpair failed and we were unable to recover it.
00:29:56.503 [2024-04-26 09:04:13.617948] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.503 [2024-04-26 09:04:13.618081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.503 [2024-04-26 09:04:13.618100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.503 [2024-04-26 09:04:13.618110] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.503 [2024-04-26 09:04:13.618118] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.503 [2024-04-26 09:04:13.618137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.503 qpair failed and we were unable to recover it.
00:29:56.503 [2024-04-26 09:04:13.627867] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.503 [2024-04-26 09:04:13.627996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.503 [2024-04-26 09:04:13.628015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.503 [2024-04-26 09:04:13.628025] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.503 [2024-04-26 09:04:13.628034] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.503 [2024-04-26 09:04:13.628054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.503 qpair failed and we were unable to recover it.
00:29:56.503 [2024-04-26 09:04:13.637993] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.503 [2024-04-26 09:04:13.638123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.503 [2024-04-26 09:04:13.638143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.503 [2024-04-26 09:04:13.638156] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.503 [2024-04-26 09:04:13.638165] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.503 [2024-04-26 09:04:13.638184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.503 qpair failed and we were unable to recover it.
00:29:56.503 [2024-04-26 09:04:13.647940] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.503 [2024-04-26 09:04:13.648068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.503 [2024-04-26 09:04:13.648088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.503 [2024-04-26 09:04:13.648098] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.503 [2024-04-26 09:04:13.648107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.503 [2024-04-26 09:04:13.648126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.503 qpair failed and we were unable to recover it.
00:29:56.503 [2024-04-26 09:04:13.657986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.503 [2024-04-26 09:04:13.658159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.503 [2024-04-26 09:04:13.658179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.503 [2024-04-26 09:04:13.658189] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.503 [2024-04-26 09:04:13.658197] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.504 [2024-04-26 09:04:13.658217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.504 qpair failed and we were unable to recover it.
00:29:56.504 [2024-04-26 09:04:13.668037] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.504 [2024-04-26 09:04:13.668168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.504 [2024-04-26 09:04:13.668187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.504 [2024-04-26 09:04:13.668197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.504 [2024-04-26 09:04:13.668206] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.504 [2024-04-26 09:04:13.668225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.504 qpair failed and we were unable to recover it.
00:29:56.504 [2024-04-26 09:04:13.678066] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.504 [2024-04-26 09:04:13.678195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.504 [2024-04-26 09:04:13.678214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.504 [2024-04-26 09:04:13.678224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.504 [2024-04-26 09:04:13.678233] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.504 [2024-04-26 09:04:13.678253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.504 qpair failed and we were unable to recover it.
00:29:56.504 [2024-04-26 09:04:13.688124] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.504 [2024-04-26 09:04:13.688255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.504 [2024-04-26 09:04:13.688275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.504 [2024-04-26 09:04:13.688285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.504 [2024-04-26 09:04:13.688294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.504 [2024-04-26 09:04:13.688313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.504 qpair failed and we were unable to recover it.
00:29:56.504 [2024-04-26 09:04:13.698355] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.504 [2024-04-26 09:04:13.698488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.504 [2024-04-26 09:04:13.698508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.504 [2024-04-26 09:04:13.698518] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.504 [2024-04-26 09:04:13.698526] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.504 [2024-04-26 09:04:13.698546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.504 qpair failed and we were unable to recover it.
00:29:56.504 [2024-04-26 09:04:13.708106] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.504 [2024-04-26 09:04:13.708236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.504 [2024-04-26 09:04:13.708256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.504 [2024-04-26 09:04:13.708265] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.504 [2024-04-26 09:04:13.708274] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.504 [2024-04-26 09:04:13.708293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.504 qpair failed and we were unable to recover it.
00:29:56.504 [2024-04-26 09:04:13.718220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.504 [2024-04-26 09:04:13.718370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.504 [2024-04-26 09:04:13.718390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.504 [2024-04-26 09:04:13.718399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.504 [2024-04-26 09:04:13.718408] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.504 [2024-04-26 09:04:13.718427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.504 qpair failed and we were unable to recover it.
00:29:56.504 [2024-04-26 09:04:13.728195] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.504 [2024-04-26 09:04:13.728332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.504 [2024-04-26 09:04:13.728352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.504 [2024-04-26 09:04:13.728365] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.504 [2024-04-26 09:04:13.728374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.504 [2024-04-26 09:04:13.728393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.504 qpair failed and we were unable to recover it.
00:29:56.504 [2024-04-26 09:04:13.738244] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.504 [2024-04-26 09:04:13.738378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.504 [2024-04-26 09:04:13.738397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.504 [2024-04-26 09:04:13.738407] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.504 [2024-04-26 09:04:13.738415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.504 [2024-04-26 09:04:13.738434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.504 qpair failed and we were unable to recover it.
00:29:56.763 [2024-04-26 09:04:13.748301] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.763 [2024-04-26 09:04:13.748433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.763 [2024-04-26 09:04:13.748457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.763 [2024-04-26 09:04:13.748468] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.763 [2024-04-26 09:04:13.748476] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.763 [2024-04-26 09:04:13.748495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.763 qpair failed and we were unable to recover it.
00:29:56.763 [2024-04-26 09:04:13.758281] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.763 [2024-04-26 09:04:13.758415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.763 [2024-04-26 09:04:13.758435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.763 [2024-04-26 09:04:13.758445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.763 [2024-04-26 09:04:13.758459] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.763 [2024-04-26 09:04:13.758479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.763 qpair failed and we were unable to recover it.
00:29:56.763 [2024-04-26 09:04:13.768294] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.763 [2024-04-26 09:04:13.768651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.763 [2024-04-26 09:04:13.768670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.763 [2024-04-26 09:04:13.768680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.763 [2024-04-26 09:04:13.768688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.763 [2024-04-26 09:04:13.768708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.763 qpair failed and we were unable to recover it.
00:29:56.763 [2024-04-26 09:04:13.778337] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.763 [2024-04-26 09:04:13.778471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.763 [2024-04-26 09:04:13.778491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.763 [2024-04-26 09:04:13.778502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.763 [2024-04-26 09:04:13.778510] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.763 [2024-04-26 09:04:13.778530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.763 qpair failed and we were unable to recover it.
00:29:56.763 [2024-04-26 09:04:13.788394] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.763 [2024-04-26 09:04:13.788567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.763 [2024-04-26 09:04:13.788586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.763 [2024-04-26 09:04:13.788596] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.763 [2024-04-26 09:04:13.788605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.763 [2024-04-26 09:04:13.788624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.763 qpair failed and we were unable to recover it.
00:29:56.763 [2024-04-26 09:04:13.798583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.763 [2024-04-26 09:04:13.798919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.763 [2024-04-26 09:04:13.798938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.763 [2024-04-26 09:04:13.798947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.763 [2024-04-26 09:04:13.798956] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.763 [2024-04-26 09:04:13.798975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.763 qpair failed and we were unable to recover it.
00:29:56.763 [2024-04-26 09:04:13.808421] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.763 [2024-04-26 09:04:13.808555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.763 [2024-04-26 09:04:13.808574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.763 [2024-04-26 09:04:13.808584] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.763 [2024-04-26 09:04:13.808593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.763 [2024-04-26 09:04:13.808613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.763 qpair failed and we were unable to recover it.
00:29:56.763 [2024-04-26 09:04:13.818464] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.763 [2024-04-26 09:04:13.818596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.763 [2024-04-26 09:04:13.818618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.763 [2024-04-26 09:04:13.818628] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.763 [2024-04-26 09:04:13.818637] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.764 [2024-04-26 09:04:13.818656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.764 qpair failed and we were unable to recover it.
00:29:56.764 [2024-04-26 09:04:13.828482] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.764 [2024-04-26 09:04:13.828615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.764 [2024-04-26 09:04:13.828634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.764 [2024-04-26 09:04:13.828644] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.764 [2024-04-26 09:04:13.828653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.764 [2024-04-26 09:04:13.828672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.764 qpair failed and we were unable to recover it.
00:29:56.764 [2024-04-26 09:04:13.838543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.764 [2024-04-26 09:04:13.838692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.764 [2024-04-26 09:04:13.838712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.764 [2024-04-26 09:04:13.838722] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.764 [2024-04-26 09:04:13.838731] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.764 [2024-04-26 09:04:13.838750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.764 qpair failed and we were unable to recover it.
00:29:56.764 [2024-04-26 09:04:13.848559] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.764 [2024-04-26 09:04:13.848689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.764 [2024-04-26 09:04:13.848708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.764 [2024-04-26 09:04:13.848717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.764 [2024-04-26 09:04:13.848726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.764 [2024-04-26 09:04:13.848746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.764 qpair failed and we were unable to recover it.
00:29:56.764 [2024-04-26 09:04:13.858610] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.764 [2024-04-26 09:04:13.858740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.764 [2024-04-26 09:04:13.858758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.764 [2024-04-26 09:04:13.858768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.764 [2024-04-26 09:04:13.858777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.764 [2024-04-26 09:04:13.858800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.764 qpair failed and we were unable to recover it.
00:29:56.764 [2024-04-26 09:04:13.868626] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.764 [2024-04-26 09:04:13.868759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.764 [2024-04-26 09:04:13.868778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.764 [2024-04-26 09:04:13.868788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.764 [2024-04-26 09:04:13.868797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.764 [2024-04-26 09:04:13.868816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.764 qpair failed and we were unable to recover it.
00:29:56.764 [2024-04-26 09:04:13.878687] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.764 [2024-04-26 09:04:13.878836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.764 [2024-04-26 09:04:13.878854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.764 [2024-04-26 09:04:13.878864] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.764 [2024-04-26 09:04:13.878872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.764 [2024-04-26 09:04:13.878892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.764 qpair failed and we were unable to recover it.
00:29:56.764 [2024-04-26 09:04:13.888600] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.764 [2024-04-26 09:04:13.888775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.764 [2024-04-26 09:04:13.888794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.764 [2024-04-26 09:04:13.888804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.764 [2024-04-26 09:04:13.888812] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.764 [2024-04-26 09:04:13.888832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.764 qpair failed and we were unable to recover it.
00:29:56.764 [2024-04-26 09:04:13.898637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.764 [2024-04-26 09:04:13.898766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.764 [2024-04-26 09:04:13.898784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.764 [2024-04-26 09:04:13.898793] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.764 [2024-04-26 09:04:13.898802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.764 [2024-04-26 09:04:13.898822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.764 qpair failed and we were unable to recover it.
00:29:56.764 [2024-04-26 09:04:13.908712] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.764 [2024-04-26 09:04:13.908844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.764 [2024-04-26 09:04:13.908866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.764 [2024-04-26 09:04:13.908876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.764 [2024-04-26 09:04:13.908884] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.764 [2024-04-26 09:04:13.908903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.764 qpair failed and we were unable to recover it.
00:29:56.764 [2024-04-26 09:04:13.918747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.764 [2024-04-26 09:04:13.918883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.764 [2024-04-26 09:04:13.918901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.764 [2024-04-26 09:04:13.918911] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.764 [2024-04-26 09:04:13.918920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.764 [2024-04-26 09:04:13.918939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.764 qpair failed and we were unable to recover it.
00:29:56.764 [2024-04-26 09:04:13.928769] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.764 [2024-04-26 09:04:13.928897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.764 [2024-04-26 09:04:13.928915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.764 [2024-04-26 09:04:13.928925] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.764 [2024-04-26 09:04:13.928934] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.764 [2024-04-26 09:04:13.928953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.764 qpair failed and we were unable to recover it.
00:29:56.764 [2024-04-26 09:04:13.938811] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.764 [2024-04-26 09:04:13.938944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.764 [2024-04-26 09:04:13.938963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.764 [2024-04-26 09:04:13.938973] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.764 [2024-04-26 09:04:13.938981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.765 [2024-04-26 09:04:13.939001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.765 qpair failed and we were unable to recover it.
00:29:56.765 [2024-04-26 09:04:13.948853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.765 [2024-04-26 09:04:13.948990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.765 [2024-04-26 09:04:13.949009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.765 [2024-04-26 09:04:13.949019] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.765 [2024-04-26 09:04:13.949031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.765 [2024-04-26 09:04:13.949051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.765 qpair failed and we were unable to recover it.
00:29:56.765 [2024-04-26 09:04:13.958857] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.765 [2024-04-26 09:04:13.958987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.765 [2024-04-26 09:04:13.959006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.765 [2024-04-26 09:04:13.959016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.765 [2024-04-26 09:04:13.959025] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.765 [2024-04-26 09:04:13.959045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.765 qpair failed and we were unable to recover it.
00:29:56.765 [2024-04-26 09:04:13.968896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.765 [2024-04-26 09:04:13.969028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.765 [2024-04-26 09:04:13.969048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.765 [2024-04-26 09:04:13.969058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.765 [2024-04-26 09:04:13.969066] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.765 [2024-04-26 09:04:13.969085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.765 qpair failed and we were unable to recover it.
00:29:56.765 [2024-04-26 09:04:13.978919] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.765 [2024-04-26 09:04:13.979055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.765 [2024-04-26 09:04:13.979074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.765 [2024-04-26 09:04:13.979084] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.765 [2024-04-26 09:04:13.979092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.765 [2024-04-26 09:04:13.979112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.765 qpair failed and we were unable to recover it.
00:29:56.765 [2024-04-26 09:04:13.988961] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.765 [2024-04-26 09:04:13.989093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.765 [2024-04-26 09:04:13.989111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.765 [2024-04-26 09:04:13.989121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.765 [2024-04-26 09:04:13.989130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.765 [2024-04-26 09:04:13.989149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.765 qpair failed and we were unable to recover it.
00:29:56.765 [2024-04-26 09:04:13.998995] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.765 [2024-04-26 09:04:13.999134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.765 [2024-04-26 09:04:13.999153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.765 [2024-04-26 09:04:13.999163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.765 [2024-04-26 09:04:13.999172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:56.765 [2024-04-26 09:04:13.999191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:56.765 qpair failed and we were unable to recover it.
00:29:56.765 [2024-04-26 09:04:14.009016] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.765 [2024-04-26 09:04:14.009165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.024 [2024-04-26 09:04:14.009183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.024 [2024-04-26 09:04:14.009193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.024 [2024-04-26 09:04:14.009202] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.024 [2024-04-26 09:04:14.009221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.024 qpair failed and we were unable to recover it.
00:29:57.024 [2024-04-26 09:04:14.018966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.024 [2024-04-26 09:04:14.019138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.024 [2024-04-26 09:04:14.019157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.024 [2024-04-26 09:04:14.019167] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.024 [2024-04-26 09:04:14.019176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.024 [2024-04-26 09:04:14.019196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.024 qpair failed and we were unable to recover it.
00:29:57.024 [2024-04-26 09:04:14.029072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.024 [2024-04-26 09:04:14.029205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.024 [2024-04-26 09:04:14.029223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.024 [2024-04-26 09:04:14.029233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.024 [2024-04-26 09:04:14.029242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.024 [2024-04-26 09:04:14.029261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.024 qpair failed and we were unable to recover it.
00:29:57.024 [2024-04-26 09:04:14.039032] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.024 [2024-04-26 09:04:14.039165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.024 [2024-04-26 09:04:14.039183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.024 [2024-04-26 09:04:14.039193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.024 [2024-04-26 09:04:14.039205] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.024 [2024-04-26 09:04:14.039225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.024 qpair failed and we were unable to recover it.
00:29:57.024 [2024-04-26 09:04:14.049122] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.024 [2024-04-26 09:04:14.049257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.024 [2024-04-26 09:04:14.049275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.024 [2024-04-26 09:04:14.049285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.024 [2024-04-26 09:04:14.049294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.024 [2024-04-26 09:04:14.049313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.024 qpair failed and we were unable to recover it.
00:29:57.024 [2024-04-26 09:04:14.059162] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.024 [2024-04-26 09:04:14.059313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.024 [2024-04-26 09:04:14.059331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.024 [2024-04-26 09:04:14.059341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.024 [2024-04-26 09:04:14.059350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.024 [2024-04-26 09:04:14.059370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.024 qpair failed and we were unable to recover it.
00:29:57.024 [2024-04-26 09:04:14.069197] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.024 [2024-04-26 09:04:14.069332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.024 [2024-04-26 09:04:14.069351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.024 [2024-04-26 09:04:14.069361] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.024 [2024-04-26 09:04:14.069370] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.024 [2024-04-26 09:04:14.069389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.024 qpair failed and we were unable to recover it.
00:29:57.024 [2024-04-26 09:04:14.079264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.024 [2024-04-26 09:04:14.079437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.024 [2024-04-26 09:04:14.079460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.024 [2024-04-26 09:04:14.079471] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.024 [2024-04-26 09:04:14.079480] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.024 [2024-04-26 09:04:14.079500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.024 qpair failed and we were unable to recover it.
00:29:57.024 [2024-04-26 09:04:14.089262] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.024 [2024-04-26 09:04:14.089408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.024 [2024-04-26 09:04:14.089427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.024 [2024-04-26 09:04:14.089437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.024 [2024-04-26 09:04:14.089445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.024 [2024-04-26 09:04:14.089469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.024 qpair failed and we were unable to recover it.
00:29:57.024 [2024-04-26 09:04:14.099284] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.024 [2024-04-26 09:04:14.099418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.024 [2024-04-26 09:04:14.099437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.024 [2024-04-26 09:04:14.099447] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.024 [2024-04-26 09:04:14.099461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.024 [2024-04-26 09:04:14.099480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.024 qpair failed and we were unable to recover it.
00:29:57.024 [2024-04-26 09:04:14.109307] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.024 [2024-04-26 09:04:14.109437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.024 [2024-04-26 09:04:14.109463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.024 [2024-04-26 09:04:14.109473] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.024 [2024-04-26 09:04:14.109482] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.025 [2024-04-26 09:04:14.109502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.025 qpair failed and we were unable to recover it.
00:29:57.025 [2024-04-26 09:04:14.119339] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.025 [2024-04-26 09:04:14.119479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.025 [2024-04-26 09:04:14.119498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.025 [2024-04-26 09:04:14.119508] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.025 [2024-04-26 09:04:14.119517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.025 [2024-04-26 09:04:14.119537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.025 qpair failed and we were unable to recover it.
00:29:57.025 [2024-04-26 09:04:14.129387] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.025 [2024-04-26 09:04:14.129556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.025 [2024-04-26 09:04:14.129575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.025 [2024-04-26 09:04:14.129588] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.025 [2024-04-26 09:04:14.129597] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.025 [2024-04-26 09:04:14.129617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.025 qpair failed and we were unable to recover it.
00:29:57.025 [2024-04-26 09:04:14.139398] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.025 [2024-04-26 09:04:14.139567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.025 [2024-04-26 09:04:14.139586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.025 [2024-04-26 09:04:14.139596] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.025 [2024-04-26 09:04:14.139606] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.025 [2024-04-26 09:04:14.139625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.025 qpair failed and we were unable to recover it.
00:29:57.025 [2024-04-26 09:04:14.149420] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.025 [2024-04-26 09:04:14.149582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.025 [2024-04-26 09:04:14.149600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.025 [2024-04-26 09:04:14.149610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.025 [2024-04-26 09:04:14.149619] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.025 [2024-04-26 09:04:14.149639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.025 qpair failed and we were unable to recover it.
00:29:57.025 [2024-04-26 09:04:14.159472] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.025 [2024-04-26 09:04:14.159606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.025 [2024-04-26 09:04:14.159625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.025 [2024-04-26 09:04:14.159635] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.025 [2024-04-26 09:04:14.159643] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.025 [2024-04-26 09:04:14.159663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.025 qpair failed and we were unable to recover it.
00:29:57.025 [2024-04-26 09:04:14.169481] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.025 [2024-04-26 09:04:14.169615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.025 [2024-04-26 09:04:14.169634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.025 [2024-04-26 09:04:14.169644] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.025 [2024-04-26 09:04:14.169653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.025 [2024-04-26 09:04:14.169672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.025 qpair failed and we were unable to recover it.
00:29:57.025 [2024-04-26 09:04:14.179503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.025 [2024-04-26 09:04:14.179648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.025 [2024-04-26 09:04:14.179667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.025 [2024-04-26 09:04:14.179677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.025 [2024-04-26 09:04:14.179686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.025 [2024-04-26 09:04:14.179705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.025 qpair failed and we were unable to recover it.
00:29:57.025 [2024-04-26 09:04:14.189538] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.025 [2024-04-26 09:04:14.189675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.025 [2024-04-26 09:04:14.189693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.025 [2024-04-26 09:04:14.189704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.025 [2024-04-26 09:04:14.189713] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.025 [2024-04-26 09:04:14.189732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.025 qpair failed and we were unable to recover it.
00:29:57.025 [2024-04-26 09:04:14.199554] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.025 [2024-04-26 09:04:14.199696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.025 [2024-04-26 09:04:14.199715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.025 [2024-04-26 09:04:14.199725] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.025 [2024-04-26 09:04:14.199734] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.025 [2024-04-26 09:04:14.199754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.025 qpair failed and we were unable to recover it.
00:29:57.025 [2024-04-26 09:04:14.209521] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.025 [2024-04-26 09:04:14.209652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.025 [2024-04-26 09:04:14.209671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.025 [2024-04-26 09:04:14.209681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.025 [2024-04-26 09:04:14.209690] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.025 [2024-04-26 09:04:14.209709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.025 qpair failed and we were unable to recover it.
00:29:57.025 [2024-04-26 09:04:14.219651] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.025 [2024-04-26 09:04:14.219800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.025 [2024-04-26 09:04:14.219822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.025 [2024-04-26 09:04:14.219832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.025 [2024-04-26 09:04:14.219840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.025 [2024-04-26 09:04:14.219860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.025 qpair failed and we were unable to recover it.
00:29:57.025 [2024-04-26 09:04:14.229583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.025 [2024-04-26 09:04:14.229724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.025 [2024-04-26 09:04:14.229743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.025 [2024-04-26 09:04:14.229753] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.025 [2024-04-26 09:04:14.229762] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.025 [2024-04-26 09:04:14.229781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.025 qpair failed and we were unable to recover it.
00:29:57.025 [2024-04-26 09:04:14.239670] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.025 [2024-04-26 09:04:14.239808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.025 [2024-04-26 09:04:14.239826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.025 [2024-04-26 09:04:14.239836] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.025 [2024-04-26 09:04:14.239845] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.025 [2024-04-26 09:04:14.239864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.025 qpair failed and we were unable to recover it.
00:29:57.025 [2024-04-26 09:04:14.249731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.026 [2024-04-26 09:04:14.249891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.026 [2024-04-26 09:04:14.249912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.026 [2024-04-26 09:04:14.249922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.026 [2024-04-26 09:04:14.249931] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.026 [2024-04-26 09:04:14.249951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.026 qpair failed and we were unable to recover it.
00:29:57.026 [2024-04-26 09:04:14.259748] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.026 [2024-04-26 09:04:14.259920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.026 [2024-04-26 09:04:14.259938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.026 [2024-04-26 09:04:14.259948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.026 [2024-04-26 09:04:14.259957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.026 [2024-04-26 09:04:14.259980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.026 qpair failed and we were unable to recover it.
00:29:57.026 [2024-04-26 09:04:14.269775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.026 [2024-04-26 09:04:14.269915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.026 [2024-04-26 09:04:14.269934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.026 [2024-04-26 09:04:14.269944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.026 [2024-04-26 09:04:14.269953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.026 [2024-04-26 09:04:14.269971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.026 qpair failed and we were unable to recover it.
00:29:57.285 [2024-04-26 09:04:14.279793] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.285 [2024-04-26 09:04:14.279928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.285 [2024-04-26 09:04:14.279946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.285 [2024-04-26 09:04:14.279956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.285 [2024-04-26 09:04:14.279965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.285 [2024-04-26 09:04:14.279984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.285 qpair failed and we were unable to recover it.
00:29:57.285 [2024-04-26 09:04:14.289783] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.285 [2024-04-26 09:04:14.289918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.285 [2024-04-26 09:04:14.289937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.285 [2024-04-26 09:04:14.289947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.285 [2024-04-26 09:04:14.289956] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.285 [2024-04-26 09:04:14.289975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.285 qpair failed and we were unable to recover it.
00:29:57.285 [2024-04-26 09:04:14.299835] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.285 [2024-04-26 09:04:14.299972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.286 [2024-04-26 09:04:14.299992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.286 [2024-04-26 09:04:14.300003] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.286 [2024-04-26 09:04:14.300011] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.286 [2024-04-26 09:04:14.300031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.286 qpair failed and we were unable to recover it.
00:29:57.286 [2024-04-26 09:04:14.309807] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.286 [2024-04-26 09:04:14.309936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.286 [2024-04-26 09:04:14.309961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.286 [2024-04-26 09:04:14.309971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.286 [2024-04-26 09:04:14.309980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.286 [2024-04-26 09:04:14.309999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.286 qpair failed and we were unable to recover it.
00:29:57.286 [2024-04-26 09:04:14.319897] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.286 [2024-04-26 09:04:14.320030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.286 [2024-04-26 09:04:14.320048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.286 [2024-04-26 09:04:14.320058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.286 [2024-04-26 09:04:14.320067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.286 [2024-04-26 09:04:14.320086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.286 qpair failed and we were unable to recover it.
00:29:57.286 [2024-04-26 09:04:14.329918] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.286 [2024-04-26 09:04:14.330054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.286 [2024-04-26 09:04:14.330073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.286 [2024-04-26 09:04:14.330083] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.286 [2024-04-26 09:04:14.330092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.286 [2024-04-26 09:04:14.330111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.286 qpair failed and we were unable to recover it.
00:29:57.286 [2024-04-26 09:04:14.339952] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.286 [2024-04-26 09:04:14.340082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.286 [2024-04-26 09:04:14.340101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.286 [2024-04-26 09:04:14.340111] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.286 [2024-04-26 09:04:14.340120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.286 [2024-04-26 09:04:14.340138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.286 qpair failed and we were unable to recover it.
00:29:57.286 [2024-04-26 09:04:14.349987] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.286 [2024-04-26 09:04:14.350119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.286 [2024-04-26 09:04:14.350137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.286 [2024-04-26 09:04:14.350147] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.286 [2024-04-26 09:04:14.350160] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.286 [2024-04-26 09:04:14.350179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.286 qpair failed and we were unable to recover it.
00:29:57.286 [2024-04-26 09:04:14.360012] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.286 [2024-04-26 09:04:14.360142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.286 [2024-04-26 09:04:14.360161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.286 [2024-04-26 09:04:14.360171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.286 [2024-04-26 09:04:14.360180] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.286 [2024-04-26 09:04:14.360199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.286 qpair failed and we were unable to recover it.
00:29:57.286 [2024-04-26 09:04:14.370039] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.286 [2024-04-26 09:04:14.370189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.286 [2024-04-26 09:04:14.370208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.286 [2024-04-26 09:04:14.370218] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.286 [2024-04-26 09:04:14.370227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:57.286 [2024-04-26 09:04:14.370246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:57.286 qpair failed and we were unable to recover it.
00:29:57.286 [2024-04-26 09:04:14.380061] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.286 [2024-04-26 09:04:14.380195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.286 [2024-04-26 09:04:14.380213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.286 [2024-04-26 09:04:14.380224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.286 [2024-04-26 09:04:14.380232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.286 [2024-04-26 09:04:14.380251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.286 qpair failed and we were unable to recover it. 00:29:57.286 [2024-04-26 09:04:14.390098] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.286 [2024-04-26 09:04:14.390229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.286 [2024-04-26 09:04:14.390247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.286 [2024-04-26 09:04:14.390257] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.286 [2024-04-26 09:04:14.390266] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.286 [2024-04-26 09:04:14.390286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.286 qpair failed and we were unable to recover it. 00:29:57.286 [2024-04-26 09:04:14.400118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.286 [2024-04-26 09:04:14.400269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.286 [2024-04-26 09:04:14.400288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.286 [2024-04-26 09:04:14.400299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.286 [2024-04-26 09:04:14.400307] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.286 [2024-04-26 09:04:14.400326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.286 qpair failed and we were unable to recover it. 
00:29:57.286 [2024-04-26 09:04:14.410137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.286 [2024-04-26 09:04:14.410272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.286 [2024-04-26 09:04:14.410291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.286 [2024-04-26 09:04:14.410301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.287 [2024-04-26 09:04:14.410309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.287 [2024-04-26 09:04:14.410329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.287 qpair failed and we were unable to recover it. 00:29:57.287 [2024-04-26 09:04:14.420172] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.287 [2024-04-26 09:04:14.420306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.287 [2024-04-26 09:04:14.420325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.287 [2024-04-26 09:04:14.420335] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.287 [2024-04-26 09:04:14.420344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.287 [2024-04-26 09:04:14.420363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.287 qpair failed and we were unable to recover it. 00:29:57.287 [2024-04-26 09:04:14.430214] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.287 [2024-04-26 09:04:14.430344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.287 [2024-04-26 09:04:14.430362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.287 [2024-04-26 09:04:14.430372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.287 [2024-04-26 09:04:14.430381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.287 [2024-04-26 09:04:14.430400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.287 qpair failed and we were unable to recover it. 
00:29:57.287 [2024-04-26 09:04:14.440215] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.287 [2024-04-26 09:04:14.440350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.287 [2024-04-26 09:04:14.440369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.287 [2024-04-26 09:04:14.440380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.287 [2024-04-26 09:04:14.440391] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.287 [2024-04-26 09:04:14.440411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.287 qpair failed and we were unable to recover it. 00:29:57.287 [2024-04-26 09:04:14.450270] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.287 [2024-04-26 09:04:14.450403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.287 [2024-04-26 09:04:14.450421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.287 [2024-04-26 09:04:14.450432] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.287 [2024-04-26 09:04:14.450441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.287 [2024-04-26 09:04:14.450465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.287 qpair failed and we were unable to recover it. 00:29:57.287 [2024-04-26 09:04:14.460291] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.287 [2024-04-26 09:04:14.460426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.287 [2024-04-26 09:04:14.460445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.287 [2024-04-26 09:04:14.460461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.287 [2024-04-26 09:04:14.460470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.287 [2024-04-26 09:04:14.460490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.287 qpair failed and we were unable to recover it. 
00:29:57.287 [2024-04-26 09:04:14.470330] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.287 [2024-04-26 09:04:14.470466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.287 [2024-04-26 09:04:14.470485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.287 [2024-04-26 09:04:14.470496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.287 [2024-04-26 09:04:14.470504] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.287 [2024-04-26 09:04:14.470523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.287 qpair failed and we were unable to recover it. 00:29:57.287 [2024-04-26 09:04:14.480366] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.287 [2024-04-26 09:04:14.480522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.287 [2024-04-26 09:04:14.480540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.287 [2024-04-26 09:04:14.480551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.287 [2024-04-26 09:04:14.480559] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.287 [2024-04-26 09:04:14.480579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.287 qpair failed and we were unable to recover it. 00:29:57.287 [2024-04-26 09:04:14.490317] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.287 [2024-04-26 09:04:14.490454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.287 [2024-04-26 09:04:14.490473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.287 [2024-04-26 09:04:14.490483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.287 [2024-04-26 09:04:14.490493] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.287 [2024-04-26 09:04:14.490512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.287 qpair failed and we were unable to recover it. 
00:29:57.287 [2024-04-26 09:04:14.500411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.287 [2024-04-26 09:04:14.500546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.287 [2024-04-26 09:04:14.500564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.287 [2024-04-26 09:04:14.500574] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.287 [2024-04-26 09:04:14.500583] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.287 [2024-04-26 09:04:14.500602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.287 qpair failed and we were unable to recover it. 00:29:57.287 [2024-04-26 09:04:14.510426] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.287 [2024-04-26 09:04:14.510575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.287 [2024-04-26 09:04:14.510594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.287 [2024-04-26 09:04:14.510604] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.287 [2024-04-26 09:04:14.510613] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.287 [2024-04-26 09:04:14.510632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.287 qpair failed and we were unable to recover it. 00:29:57.287 [2024-04-26 09:04:14.520467] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.287 [2024-04-26 09:04:14.520608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.287 [2024-04-26 09:04:14.520627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.288 [2024-04-26 09:04:14.520637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.288 [2024-04-26 09:04:14.520646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.288 [2024-04-26 09:04:14.520665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.288 qpair failed and we were unable to recover it. 
00:29:57.288 [2024-04-26 09:04:14.530534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.288 [2024-04-26 09:04:14.530666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.288 [2024-04-26 09:04:14.530684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.288 [2024-04-26 09:04:14.530697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.288 [2024-04-26 09:04:14.530706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.288 [2024-04-26 09:04:14.530726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.288 qpair failed and we were unable to recover it. 00:29:57.548 [2024-04-26 09:04:14.540486] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.548 [2024-04-26 09:04:14.540664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.548 [2024-04-26 09:04:14.540684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.548 [2024-04-26 09:04:14.540694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.548 [2024-04-26 09:04:14.540703] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.548 [2024-04-26 09:04:14.540723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.548 qpair failed and we were unable to recover it. 00:29:57.548 [2024-04-26 09:04:14.550593] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.548 [2024-04-26 09:04:14.550725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.548 [2024-04-26 09:04:14.550744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.548 [2024-04-26 09:04:14.550754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.548 [2024-04-26 09:04:14.550762] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.548 [2024-04-26 09:04:14.550782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.548 qpair failed and we were unable to recover it. 
00:29:57.548 [2024-04-26 09:04:14.560585] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.548 [2024-04-26 09:04:14.560715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.548 [2024-04-26 09:04:14.560733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.548 [2024-04-26 09:04:14.560743] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.548 [2024-04-26 09:04:14.560752] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.548 [2024-04-26 09:04:14.560772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.548 qpair failed and we were unable to recover it. 00:29:57.548 [2024-04-26 09:04:14.570648] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.548 [2024-04-26 09:04:14.570791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.548 [2024-04-26 09:04:14.570810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.548 [2024-04-26 09:04:14.570820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.548 [2024-04-26 09:04:14.570829] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.548 [2024-04-26 09:04:14.570848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.548 qpair failed and we were unable to recover it. 00:29:57.548 [2024-04-26 09:04:14.580637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.548 [2024-04-26 09:04:14.580767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.548 [2024-04-26 09:04:14.580785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.548 [2024-04-26 09:04:14.580795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.548 [2024-04-26 09:04:14.580804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.548 [2024-04-26 09:04:14.580824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.548 qpair failed and we were unable to recover it. 
00:29:57.548 [2024-04-26 09:04:14.590704] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.548 [2024-04-26 09:04:14.590848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.548 [2024-04-26 09:04:14.590867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.548 [2024-04-26 09:04:14.590877] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.548 [2024-04-26 09:04:14.590886] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.548 [2024-04-26 09:04:14.590906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.548 qpair failed and we were unable to recover it. 00:29:57.548 [2024-04-26 09:04:14.600690] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.548 [2024-04-26 09:04:14.600822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.548 [2024-04-26 09:04:14.600840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.548 [2024-04-26 09:04:14.600850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.548 [2024-04-26 09:04:14.600859] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.548 [2024-04-26 09:04:14.600878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.548 qpair failed and we were unable to recover it. 00:29:57.548 [2024-04-26 09:04:14.610932] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.548 [2024-04-26 09:04:14.611062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.548 [2024-04-26 09:04:14.611081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.548 [2024-04-26 09:04:14.611091] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.548 [2024-04-26 09:04:14.611100] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.548 [2024-04-26 09:04:14.611119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.548 qpair failed and we were unable to recover it. 
00:29:57.548 [2024-04-26 09:04:14.620688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.548 [2024-04-26 09:04:14.620828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.548 [2024-04-26 09:04:14.620850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.548 [2024-04-26 09:04:14.620860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.548 [2024-04-26 09:04:14.620869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.548 [2024-04-26 09:04:14.620888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.548 qpair failed and we were unable to recover it. 00:29:57.548 [2024-04-26 09:04:14.630785] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.548 [2024-04-26 09:04:14.630915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.548 [2024-04-26 09:04:14.630934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.548 [2024-04-26 09:04:14.630944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.548 [2024-04-26 09:04:14.630953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.548 [2024-04-26 09:04:14.630972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.548 qpair failed and we were unable to recover it. 00:29:57.548 [2024-04-26 09:04:14.640730] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.548 [2024-04-26 09:04:14.640872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.548 [2024-04-26 09:04:14.640891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.548 [2024-04-26 09:04:14.640901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.548 [2024-04-26 09:04:14.640910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.548 [2024-04-26 09:04:14.640928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.548 qpair failed and we were unable to recover it. 
00:29:57.548 [2024-04-26 09:04:14.650828] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.548 [2024-04-26 09:04:14.650956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.548 [2024-04-26 09:04:14.650975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.548 [2024-04-26 09:04:14.650985] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.548 [2024-04-26 09:04:14.650994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.548 [2024-04-26 09:04:14.651013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.548 qpair failed and we were unable to recover it. 00:29:57.548 [2024-04-26 09:04:14.660862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.549 [2024-04-26 09:04:14.660995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.549 [2024-04-26 09:04:14.661013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.549 [2024-04-26 09:04:14.661023] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.549 [2024-04-26 09:04:14.661031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.549 [2024-04-26 09:04:14.661054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.549 qpair failed and we were unable to recover it. 00:29:57.549 [2024-04-26 09:04:14.670866] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.549 [2024-04-26 09:04:14.671002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.549 [2024-04-26 09:04:14.671021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.549 [2024-04-26 09:04:14.671031] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.549 [2024-04-26 09:04:14.671040] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.549 [2024-04-26 09:04:14.671059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.549 qpair failed and we were unable to recover it. 
00:29:57.549 [2024-04-26 09:04:14.680926] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.549 [2024-04-26 09:04:14.681079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.549 [2024-04-26 09:04:14.681098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.549 [2024-04-26 09:04:14.681108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.549 [2024-04-26 09:04:14.681117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.549 [2024-04-26 09:04:14.681136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.549 qpair failed and we were unable to recover it. 00:29:57.549 [2024-04-26 09:04:14.690954] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.549 [2024-04-26 09:04:14.691119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.549 [2024-04-26 09:04:14.691137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.549 [2024-04-26 09:04:14.691147] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.549 [2024-04-26 09:04:14.691156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.549 [2024-04-26 09:04:14.691176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.549 qpair failed and we were unable to recover it. 00:29:57.549 [2024-04-26 09:04:14.701015] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.549 [2024-04-26 09:04:14.701141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.549 [2024-04-26 09:04:14.701160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.549 [2024-04-26 09:04:14.701170] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.549 [2024-04-26 09:04:14.701179] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.549 [2024-04-26 09:04:14.701199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.549 qpair failed and we were unable to recover it. 
00:29:57.549 [2024-04-26 09:04:14.711024] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.549 [2024-04-26 09:04:14.711181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.549 [2024-04-26 09:04:14.711202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.549 [2024-04-26 09:04:14.711213] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.549 [2024-04-26 09:04:14.711221] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.549 [2024-04-26 09:04:14.711241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.549 qpair failed and we were unable to recover it. 00:29:57.549 [2024-04-26 09:04:14.720951] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.549 [2024-04-26 09:04:14.721092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.549 [2024-04-26 09:04:14.721111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.549 [2024-04-26 09:04:14.721121] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.549 [2024-04-26 09:04:14.721130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.549 [2024-04-26 09:04:14.721149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.549 qpair failed and we were unable to recover it. 00:29:57.549 [2024-04-26 09:04:14.731054] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.549 [2024-04-26 09:04:14.731210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.549 [2024-04-26 09:04:14.731228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.549 [2024-04-26 09:04:14.731238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.549 [2024-04-26 09:04:14.731246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.549 [2024-04-26 09:04:14.731265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.549 qpair failed and we were unable to recover it. 
00:29:57.549 [2024-04-26 09:04:14.741084] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.549 [2024-04-26 09:04:14.741212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.549 [2024-04-26 09:04:14.741231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.549 [2024-04-26 09:04:14.741241] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.549 [2024-04-26 09:04:14.741250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.549 [2024-04-26 09:04:14.741269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.549 qpair failed and we were unable to recover it. 00:29:57.549 [2024-04-26 09:04:14.751125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.549 [2024-04-26 09:04:14.751253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.549 [2024-04-26 09:04:14.751272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.549 [2024-04-26 09:04:14.751282] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.549 [2024-04-26 09:04:14.751291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.549 [2024-04-26 09:04:14.751313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.549 qpair failed and we were unable to recover it. 00:29:57.549 [2024-04-26 09:04:14.761143] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.549 [2024-04-26 09:04:14.761274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.549 [2024-04-26 09:04:14.761293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.549 [2024-04-26 09:04:14.761303] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.549 [2024-04-26 09:04:14.761311] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.549 [2024-04-26 09:04:14.761330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.549 qpair failed and we were unable to recover it. 
00:29:57.549 [2024-04-26 09:04:14.771165] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.549 [2024-04-26 09:04:14.771294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.549 [2024-04-26 09:04:14.771312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.549 [2024-04-26 09:04:14.771322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.549 [2024-04-26 09:04:14.771331] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.549 [2024-04-26 09:04:14.771350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.549 qpair failed and we were unable to recover it. 00:29:57.549 [2024-04-26 09:04:14.781219] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.549 [2024-04-26 09:04:14.781368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.549 [2024-04-26 09:04:14.781387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.549 [2024-04-26 09:04:14.781397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.549 [2024-04-26 09:04:14.781405] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.549 [2024-04-26 09:04:14.781424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.549 qpair failed and we were unable to recover it. 00:29:57.549 [2024-04-26 09:04:14.791254] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.549 [2024-04-26 09:04:14.791384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.549 [2024-04-26 09:04:14.791403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.550 [2024-04-26 09:04:14.791413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.550 [2024-04-26 09:04:14.791422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.550 [2024-04-26 09:04:14.791441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.550 qpair failed and we were unable to recover it. 
00:29:57.809 [2024-04-26 09:04:14.801275] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.809 [2024-04-26 09:04:14.801424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.809 [2024-04-26 09:04:14.801443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.809 [2024-04-26 09:04:14.801458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.809 [2024-04-26 09:04:14.801467] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.809 [2024-04-26 09:04:14.801486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.809 qpair failed and we were unable to recover it. 00:29:57.809 [2024-04-26 09:04:14.811315] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.809 [2024-04-26 09:04:14.811457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.809 [2024-04-26 09:04:14.811476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.809 [2024-04-26 09:04:14.811486] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.809 [2024-04-26 09:04:14.811495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.809 [2024-04-26 09:04:14.811515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.809 qpair failed and we were unable to recover it. 00:29:57.809 [2024-04-26 09:04:14.821334] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.809 [2024-04-26 09:04:14.821472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.809 [2024-04-26 09:04:14.821490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.809 [2024-04-26 09:04:14.821500] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.809 [2024-04-26 09:04:14.821509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.809 [2024-04-26 09:04:14.821528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.809 qpair failed and we were unable to recover it. 
00:29:57.809 [2024-04-26 09:04:14.831371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.809 [2024-04-26 09:04:14.831516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.809 [2024-04-26 09:04:14.831535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.809 [2024-04-26 09:04:14.831545] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.809 [2024-04-26 09:04:14.831554] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.809 [2024-04-26 09:04:14.831574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.809 qpair failed and we were unable to recover it. 00:29:57.809 [2024-04-26 09:04:14.841391] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.809 [2024-04-26 09:04:14.841555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.809 [2024-04-26 09:04:14.841573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.809 [2024-04-26 09:04:14.841583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.809 [2024-04-26 09:04:14.841595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.809 [2024-04-26 09:04:14.841615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.809 qpair failed and we were unable to recover it. 00:29:57.809 [2024-04-26 09:04:14.851406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.809 [2024-04-26 09:04:14.851545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.809 [2024-04-26 09:04:14.851565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.809 [2024-04-26 09:04:14.851576] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.809 [2024-04-26 09:04:14.851585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.809 [2024-04-26 09:04:14.851605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.809 qpair failed and we were unable to recover it. 
00:29:57.809 [2024-04-26 09:04:14.861425] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.809 [2024-04-26 09:04:14.861563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.809 [2024-04-26 09:04:14.861582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.809 [2024-04-26 09:04:14.861592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.809 [2024-04-26 09:04:14.861601] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.809 [2024-04-26 09:04:14.861620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.809 qpair failed and we were unable to recover it. 00:29:57.809 [2024-04-26 09:04:14.871496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.809 [2024-04-26 09:04:14.871630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.810 [2024-04-26 09:04:14.871649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.810 [2024-04-26 09:04:14.871659] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.810 [2024-04-26 09:04:14.871668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.810 [2024-04-26 09:04:14.871687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.810 qpair failed and we were unable to recover it. 00:29:57.810 [2024-04-26 09:04:14.881498] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.810 [2024-04-26 09:04:14.881632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.810 [2024-04-26 09:04:14.881651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.810 [2024-04-26 09:04:14.881661] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.810 [2024-04-26 09:04:14.881670] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.810 [2024-04-26 09:04:14.881689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.810 qpair failed and we were unable to recover it. 
00:29:57.810 [2024-04-26 09:04:14.891532] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.810 [2024-04-26 09:04:14.891701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.810 [2024-04-26 09:04:14.891720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.810 [2024-04-26 09:04:14.891731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.810 [2024-04-26 09:04:14.891739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.810 [2024-04-26 09:04:14.891760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.810 qpair failed and we were unable to recover it. 00:29:57.810 [2024-04-26 09:04:14.901486] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.810 [2024-04-26 09:04:14.901651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.810 [2024-04-26 09:04:14.901672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.810 [2024-04-26 09:04:14.901682] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.810 [2024-04-26 09:04:14.901691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.810 [2024-04-26 09:04:14.901711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.810 qpair failed and we were unable to recover it. 00:29:57.810 [2024-04-26 09:04:14.911560] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.810 [2024-04-26 09:04:14.911696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.810 [2024-04-26 09:04:14.911715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.810 [2024-04-26 09:04:14.911725] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.810 [2024-04-26 09:04:14.911734] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:57.810 [2024-04-26 09:04:14.911753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.810 qpair failed and we were unable to recover it. 
00:29:57.810 - 00:29:58.330 [... the identical seven-record CONNECT failure sequence (Unknown controller ID 0x1; sct 1, sc 130; tqpair=0x7fcdb4000b90; qpair id 1) repeats roughly every 10 ms from 09:04:14.921586 through 09:04:15.573671; duplicate blocks elided ...]
00:29:58.589 [2024-04-26 09:04:15.583399] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.589 [2024-04-26 09:04:15.583563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.589 [2024-04-26 09:04:15.583582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.589 [2024-04-26 09:04:15.583592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.589 [2024-04-26 09:04:15.583600] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.589 [2024-04-26 09:04:15.583620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.589 qpair failed and we were unable to recover it. 00:29:58.589 [2024-04-26 09:04:15.593515] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.589 [2024-04-26 09:04:15.593650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.589 [2024-04-26 09:04:15.593668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.589 [2024-04-26 09:04:15.593678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.589 [2024-04-26 09:04:15.593686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.589 [2024-04-26 09:04:15.593706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.589 qpair failed and we were unable to recover it. 00:29:58.589 [2024-04-26 09:04:15.603515] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.589 [2024-04-26 09:04:15.603648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.589 [2024-04-26 09:04:15.603670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.589 [2024-04-26 09:04:15.603680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.589 [2024-04-26 09:04:15.603689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.590 [2024-04-26 09:04:15.603708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.590 qpair failed and we were unable to recover it. 
00:29:58.590 [2024-04-26 09:04:15.613543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.590 [2024-04-26 09:04:15.613691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.590 [2024-04-26 09:04:15.613709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.590 [2024-04-26 09:04:15.613719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.590 [2024-04-26 09:04:15.613728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.590 [2024-04-26 09:04:15.613748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.590 qpair failed and we were unable to recover it. 00:29:58.590 [2024-04-26 09:04:15.623570] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.590 [2024-04-26 09:04:15.623702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.590 [2024-04-26 09:04:15.623720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.590 [2024-04-26 09:04:15.623730] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.590 [2024-04-26 09:04:15.623739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.590 [2024-04-26 09:04:15.623758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.590 qpair failed and we were unable to recover it. 00:29:58.590 [2024-04-26 09:04:15.633608] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.590 [2024-04-26 09:04:15.633739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.590 [2024-04-26 09:04:15.633758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.590 [2024-04-26 09:04:15.633768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.590 [2024-04-26 09:04:15.633777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.590 [2024-04-26 09:04:15.633796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.590 qpair failed and we were unable to recover it. 
00:29:58.590 [2024-04-26 09:04:15.643648] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.590 [2024-04-26 09:04:15.643780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.590 [2024-04-26 09:04:15.643799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.590 [2024-04-26 09:04:15.643809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.590 [2024-04-26 09:04:15.643821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.590 [2024-04-26 09:04:15.643840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.590 qpair failed and we were unable to recover it. 00:29:58.590 [2024-04-26 09:04:15.653971] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.590 [2024-04-26 09:04:15.654129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.590 [2024-04-26 09:04:15.654147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.590 [2024-04-26 09:04:15.654157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.590 [2024-04-26 09:04:15.654166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.590 [2024-04-26 09:04:15.654187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.590 qpair failed and we were unable to recover it. 00:29:58.590 [2024-04-26 09:04:15.663696] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.590 [2024-04-26 09:04:15.663828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.590 [2024-04-26 09:04:15.663846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.590 [2024-04-26 09:04:15.663856] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.590 [2024-04-26 09:04:15.663865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.590 [2024-04-26 09:04:15.663884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.590 qpair failed and we were unable to recover it. 
00:29:58.590 [2024-04-26 09:04:15.673721] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.590 [2024-04-26 09:04:15.673868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.590 [2024-04-26 09:04:15.673887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.590 [2024-04-26 09:04:15.673897] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.590 [2024-04-26 09:04:15.673906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.590 [2024-04-26 09:04:15.673925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.590 qpair failed and we were unable to recover it. 00:29:58.590 [2024-04-26 09:04:15.683743] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.590 [2024-04-26 09:04:15.683874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.590 [2024-04-26 09:04:15.683892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.590 [2024-04-26 09:04:15.683903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.590 [2024-04-26 09:04:15.683912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.590 [2024-04-26 09:04:15.683931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.590 qpair failed and we were unable to recover it. 00:29:58.590 [2024-04-26 09:04:15.693812] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.590 [2024-04-26 09:04:15.693978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.590 [2024-04-26 09:04:15.693996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.590 [2024-04-26 09:04:15.694006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.590 [2024-04-26 09:04:15.694015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.590 [2024-04-26 09:04:15.694035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.590 qpair failed and we were unable to recover it. 
00:29:58.590 [2024-04-26 09:04:15.703799] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.590 [2024-04-26 09:04:15.703929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.590 [2024-04-26 09:04:15.703948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.590 [2024-04-26 09:04:15.703958] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.590 [2024-04-26 09:04:15.703967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.590 [2024-04-26 09:04:15.703986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.590 qpair failed and we were unable to recover it. 00:29:58.590 [2024-04-26 09:04:15.713851] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.590 [2024-04-26 09:04:15.713999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.590 [2024-04-26 09:04:15.714017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.590 [2024-04-26 09:04:15.714028] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.590 [2024-04-26 09:04:15.714037] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.590 [2024-04-26 09:04:15.714057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.590 qpair failed and we were unable to recover it. 00:29:58.590 [2024-04-26 09:04:15.723855] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.590 [2024-04-26 09:04:15.723988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.590 [2024-04-26 09:04:15.724007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.590 [2024-04-26 09:04:15.724018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.590 [2024-04-26 09:04:15.724027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.590 [2024-04-26 09:04:15.724046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.590 qpair failed and we were unable to recover it. 
00:29:58.590 [2024-04-26 09:04:15.733857] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.590 [2024-04-26 09:04:15.734201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.590 [2024-04-26 09:04:15.734219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.590 [2024-04-26 09:04:15.734230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.590 [2024-04-26 09:04:15.734242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.590 [2024-04-26 09:04:15.734261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.590 qpair failed and we were unable to recover it. 00:29:58.590 [2024-04-26 09:04:15.743914] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.591 [2024-04-26 09:04:15.744042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.591 [2024-04-26 09:04:15.744061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.591 [2024-04-26 09:04:15.744071] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.591 [2024-04-26 09:04:15.744080] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.591 [2024-04-26 09:04:15.744100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.591 qpair failed and we were unable to recover it. 00:29:58.591 [2024-04-26 09:04:15.753974] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.591 [2024-04-26 09:04:15.754136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.591 [2024-04-26 09:04:15.754154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.591 [2024-04-26 09:04:15.754164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.591 [2024-04-26 09:04:15.754173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.591 [2024-04-26 09:04:15.754193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.591 qpair failed and we were unable to recover it. 
00:29:58.591 [2024-04-26 09:04:15.763967] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.591 [2024-04-26 09:04:15.764102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.591 [2024-04-26 09:04:15.764120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.591 [2024-04-26 09:04:15.764131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.591 [2024-04-26 09:04:15.764140] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.591 [2024-04-26 09:04:15.764159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.591 qpair failed and we were unable to recover it. 00:29:58.591 [2024-04-26 09:04:15.773997] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.591 [2024-04-26 09:04:15.774126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.591 [2024-04-26 09:04:15.774145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.591 [2024-04-26 09:04:15.774155] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.591 [2024-04-26 09:04:15.774164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.591 [2024-04-26 09:04:15.774184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.591 qpair failed and we were unable to recover it. 00:29:58.591 [2024-04-26 09:04:15.784028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.591 [2024-04-26 09:04:15.784160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.591 [2024-04-26 09:04:15.784179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.591 [2024-04-26 09:04:15.784189] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.591 [2024-04-26 09:04:15.784198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.591 [2024-04-26 09:04:15.784217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.591 qpair failed and we were unable to recover it. 
00:29:58.591 [2024-04-26 09:04:15.794086] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.591 [2024-04-26 09:04:15.794232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.591 [2024-04-26 09:04:15.794250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.591 [2024-04-26 09:04:15.794261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.591 [2024-04-26 09:04:15.794270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.591 [2024-04-26 09:04:15.794289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.591 qpair failed and we were unable to recover it. 00:29:58.591 [2024-04-26 09:04:15.804118] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.591 [2024-04-26 09:04:15.804285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.591 [2024-04-26 09:04:15.804304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.591 [2024-04-26 09:04:15.804314] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.591 [2024-04-26 09:04:15.804323] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.591 [2024-04-26 09:04:15.804343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.591 qpair failed and we were unable to recover it. 00:29:58.591 [2024-04-26 09:04:15.814096] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.591 [2024-04-26 09:04:15.814259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.591 [2024-04-26 09:04:15.814278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.591 [2024-04-26 09:04:15.814288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.591 [2024-04-26 09:04:15.814297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.591 [2024-04-26 09:04:15.814317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.591 qpair failed and we were unable to recover it. 
00:29:58.591 [2024-04-26 09:04:15.824130] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.591 [2024-04-26 09:04:15.824263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.591 [2024-04-26 09:04:15.824282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.591 [2024-04-26 09:04:15.824297] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.591 [2024-04-26 09:04:15.824305] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.591 [2024-04-26 09:04:15.824325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.591 qpair failed and we were unable to recover it. 00:29:58.591 [2024-04-26 09:04:15.834158] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.591 [2024-04-26 09:04:15.834313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.591 [2024-04-26 09:04:15.834331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.591 [2024-04-26 09:04:15.834341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.591 [2024-04-26 09:04:15.834350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.591 [2024-04-26 09:04:15.834369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.591 qpair failed and we were unable to recover it. 00:29:58.851 [2024-04-26 09:04:15.844151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.851 [2024-04-26 09:04:15.844328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.851 [2024-04-26 09:04:15.844349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.851 [2024-04-26 09:04:15.844360] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.851 [2024-04-26 09:04:15.844370] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.851 [2024-04-26 09:04:15.844389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.851 qpair failed and we were unable to recover it. 
00:29:58.851 [2024-04-26 09:04:15.854255] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.851 [2024-04-26 09:04:15.854401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.851 [2024-04-26 09:04:15.854419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.851 [2024-04-26 09:04:15.854430] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.851 [2024-04-26 09:04:15.854439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.851 [2024-04-26 09:04:15.854464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.851 qpair failed and we were unable to recover it. 00:29:58.851 [2024-04-26 09:04:15.864257] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.851 [2024-04-26 09:04:15.864391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.851 [2024-04-26 09:04:15.864410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.851 [2024-04-26 09:04:15.864420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.851 [2024-04-26 09:04:15.864429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.851 [2024-04-26 09:04:15.864448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.851 qpair failed and we were unable to recover it. 00:29:58.851 [2024-04-26 09:04:15.874285] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.851 [2024-04-26 09:04:15.874415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.851 [2024-04-26 09:04:15.874434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.851 [2024-04-26 09:04:15.874445] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.851 [2024-04-26 09:04:15.874459] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.851 [2024-04-26 09:04:15.874478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.851 qpair failed and we were unable to recover it. 
00:29:58.851 [2024-04-26 09:04:15.884339] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.851 [2024-04-26 09:04:15.884473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.851 [2024-04-26 09:04:15.884492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.851 [2024-04-26 09:04:15.884502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.851 [2024-04-26 09:04:15.884511] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.851 [2024-04-26 09:04:15.884530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.851 qpair failed and we were unable to recover it. 00:29:58.851 [2024-04-26 09:04:15.894320] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.851 [2024-04-26 09:04:15.894456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.852 [2024-04-26 09:04:15.894475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.852 [2024-04-26 09:04:15.894485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.852 [2024-04-26 09:04:15.894494] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.852 [2024-04-26 09:04:15.894514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.852 qpair failed and we were unable to recover it. 00:29:58.852 [2024-04-26 09:04:15.904379] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.852 [2024-04-26 09:04:15.904513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.852 [2024-04-26 09:04:15.904531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.852 [2024-04-26 09:04:15.904541] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.852 [2024-04-26 09:04:15.904550] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.852 [2024-04-26 09:04:15.904570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.852 qpair failed and we were unable to recover it. 
00:29:58.852 [2024-04-26 09:04:15.914404] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.852 [2024-04-26 09:04:15.914541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.852 [2024-04-26 09:04:15.914563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.852 [2024-04-26 09:04:15.914573] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.852 [2024-04-26 09:04:15.914582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.852 [2024-04-26 09:04:15.914601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.852 qpair failed and we were unable to recover it. 00:29:58.852 [2024-04-26 09:04:15.924421] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.852 [2024-04-26 09:04:15.924557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.852 [2024-04-26 09:04:15.924576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.852 [2024-04-26 09:04:15.924586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.852 [2024-04-26 09:04:15.924595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.852 [2024-04-26 09:04:15.924615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.852 qpair failed and we were unable to recover it. 00:29:58.852 [2024-04-26 09:04:15.934447] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.852 [2024-04-26 09:04:15.934581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.852 [2024-04-26 09:04:15.934600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.852 [2024-04-26 09:04:15.934609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.852 [2024-04-26 09:04:15.934618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.852 [2024-04-26 09:04:15.934637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.852 qpair failed and we were unable to recover it. 
00:29:58.852 [2024-04-26 09:04:15.944521] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.852 [2024-04-26 09:04:15.944650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.852 [2024-04-26 09:04:15.944669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.852 [2024-04-26 09:04:15.944679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.852 [2024-04-26 09:04:15.944688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.852 [2024-04-26 09:04:15.944707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.852 qpair failed and we were unable to recover it. 00:29:58.852 [2024-04-26 09:04:15.954540] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.852 [2024-04-26 09:04:15.954686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.852 [2024-04-26 09:04:15.954705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.852 [2024-04-26 09:04:15.954715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.852 [2024-04-26 09:04:15.954724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.852 [2024-04-26 09:04:15.954747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.852 qpair failed and we were unable to recover it. 00:29:58.852 [2024-04-26 09:04:15.964547] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.852 [2024-04-26 09:04:15.964681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.852 [2024-04-26 09:04:15.964699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.852 [2024-04-26 09:04:15.964710] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.852 [2024-04-26 09:04:15.964718] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.852 [2024-04-26 09:04:15.964738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.852 qpair failed and we were unable to recover it. 
00:29:58.852 [2024-04-26 09:04:15.974602] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.852 [2024-04-26 09:04:15.974737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.852 [2024-04-26 09:04:15.974756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.852 [2024-04-26 09:04:15.974766] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.852 [2024-04-26 09:04:15.974775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.852 [2024-04-26 09:04:15.974795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.852 qpair failed and we were unable to recover it. 00:29:58.852 [2024-04-26 09:04:15.984523] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.852 [2024-04-26 09:04:15.984692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.852 [2024-04-26 09:04:15.984711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.852 [2024-04-26 09:04:15.984721] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.852 [2024-04-26 09:04:15.984730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.852 [2024-04-26 09:04:15.984749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.852 qpair failed and we were unable to recover it. 00:29:58.852 [2024-04-26 09:04:15.994633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.852 [2024-04-26 09:04:15.994768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.852 [2024-04-26 09:04:15.994786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.852 [2024-04-26 09:04:15.994797] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.852 [2024-04-26 09:04:15.994805] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.852 [2024-04-26 09:04:15.994824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.852 qpair failed and we were unable to recover it. 
00:29:58.852 [2024-04-26 09:04:16.004605] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.853 [2024-04-26 09:04:16.004732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.853 [2024-04-26 09:04:16.004754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.853 [2024-04-26 09:04:16.004764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.853 [2024-04-26 09:04:16.004773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.853 [2024-04-26 09:04:16.004793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.853 qpair failed and we were unable to recover it. 00:29:58.853 [2024-04-26 09:04:16.014709] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.853 [2024-04-26 09:04:16.014866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.853 [2024-04-26 09:04:16.014885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.853 [2024-04-26 09:04:16.014895] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.853 [2024-04-26 09:04:16.014904] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.853 [2024-04-26 09:04:16.014923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.853 qpair failed and we were unable to recover it. 00:29:58.853 [2024-04-26 09:04:16.024715] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.853 [2024-04-26 09:04:16.024846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.853 [2024-04-26 09:04:16.024865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.853 [2024-04-26 09:04:16.024875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.853 [2024-04-26 09:04:16.024884] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:58.853 [2024-04-26 09:04:16.024903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.853 qpair failed and we were unable to recover it. 
00:29:58.853 [2024-04-26 09:04:16.034737] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.853 [2024-04-26 09:04:16.034884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.853 [2024-04-26 09:04:16.034903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.853 [2024-04-26 09:04:16.034913] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.853 [2024-04-26 09:04:16.034922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:58.853 [2024-04-26 09:04:16.034942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.853 qpair failed and we were unable to recover it.
00:29:58.853 [2024-04-26 09:04:16.044758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.853 [2024-04-26 09:04:16.044892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.853 [2024-04-26 09:04:16.044910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.853 [2024-04-26 09:04:16.044921] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.853 [2024-04-26 09:04:16.044933] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:58.853 [2024-04-26 09:04:16.044952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.853 qpair failed and we were unable to recover it.
00:29:58.853 [2024-04-26 09:04:16.054792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.853 [2024-04-26 09:04:16.054943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.853 [2024-04-26 09:04:16.054961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.853 [2024-04-26 09:04:16.054971] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.853 [2024-04-26 09:04:16.054980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:58.853 [2024-04-26 09:04:16.054999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.853 qpair failed and we were unable to recover it.
00:29:58.853 [2024-04-26 09:04:16.064812] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.853 [2024-04-26 09:04:16.064940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.853 [2024-04-26 09:04:16.064958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.853 [2024-04-26 09:04:16.064968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.853 [2024-04-26 09:04:16.064977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:58.853 [2024-04-26 09:04:16.064996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.853 qpair failed and we were unable to recover it.
00:29:58.853 [2024-04-26 09:04:16.074843] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.853 [2024-04-26 09:04:16.074986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.853 [2024-04-26 09:04:16.075004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.853 [2024-04-26 09:04:16.075014] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.853 [2024-04-26 09:04:16.075023] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:58.853 [2024-04-26 09:04:16.075042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.853 qpair failed and we were unable to recover it.
00:29:58.853 [2024-04-26 09:04:16.085090] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.853 [2024-04-26 09:04:16.085272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.853 [2024-04-26 09:04:16.085293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.853 [2024-04-26 09:04:16.085303] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.853 [2024-04-26 09:04:16.085312] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:58.853 [2024-04-26 09:04:16.085333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.853 qpair failed and we were unable to recover it.
00:29:58.853 [2024-04-26 09:04:16.094898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.853 [2024-04-26 09:04:16.095031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.853 [2024-04-26 09:04:16.095050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.853 [2024-04-26 09:04:16.095060] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.853 [2024-04-26 09:04:16.095069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:58.853 [2024-04-26 09:04:16.095089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:58.853 qpair failed and we were unable to recover it.
00:29:59.113 [2024-04-26 09:04:16.104923] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.113 [2024-04-26 09:04:16.105053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.113 [2024-04-26 09:04:16.105072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.113 [2024-04-26 09:04:16.105082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.113 [2024-04-26 09:04:16.105091] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.113 [2024-04-26 09:04:16.105110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.113 qpair failed and we were unable to recover it.
00:29:59.113 [2024-04-26 09:04:16.114957] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.113 [2024-04-26 09:04:16.115085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.113 [2024-04-26 09:04:16.115103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.113 [2024-04-26 09:04:16.115113] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.113 [2024-04-26 09:04:16.115122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.113 [2024-04-26 09:04:16.115141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.113 qpair failed and we were unable to recover it.
00:29:59.113 [2024-04-26 09:04:16.124986] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.113 [2024-04-26 09:04:16.125113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.113 [2024-04-26 09:04:16.125132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.113 [2024-04-26 09:04:16.125142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.113 [2024-04-26 09:04:16.125150] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.113 [2024-04-26 09:04:16.125170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.113 qpair failed and we were unable to recover it.
00:29:59.113 [2024-04-26 09:04:16.135007] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.113 [2024-04-26 09:04:16.135138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.113 [2024-04-26 09:04:16.135156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.113 [2024-04-26 09:04:16.135166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.113 [2024-04-26 09:04:16.135178] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.113 [2024-04-26 09:04:16.135197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.113 qpair failed and we were unable to recover it.
00:29:59.113 [2024-04-26 09:04:16.145033] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.113 [2024-04-26 09:04:16.145180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.113 [2024-04-26 09:04:16.145198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.113 [2024-04-26 09:04:16.145209] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.113 [2024-04-26 09:04:16.145217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.113 [2024-04-26 09:04:16.145237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.113 qpair failed and we were unable to recover it.
00:29:59.113 [2024-04-26 09:04:16.154994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.113 [2024-04-26 09:04:16.155129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.113 [2024-04-26 09:04:16.155148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.113 [2024-04-26 09:04:16.155158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.113 [2024-04-26 09:04:16.155167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.113 [2024-04-26 09:04:16.155186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.113 qpair failed and we were unable to recover it.
00:29:59.113 [2024-04-26 09:04:16.165076] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.113 [2024-04-26 09:04:16.165206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.113 [2024-04-26 09:04:16.165225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.113 [2024-04-26 09:04:16.165235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.113 [2024-04-26 09:04:16.165244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.113 [2024-04-26 09:04:16.165263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.113 qpair failed and we were unable to recover it.
00:29:59.113 [2024-04-26 09:04:16.175157] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.113 [2024-04-26 09:04:16.175306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.113 [2024-04-26 09:04:16.175324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.113 [2024-04-26 09:04:16.175334] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.113 [2024-04-26 09:04:16.175343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.113 [2024-04-26 09:04:16.175362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.113 qpair failed and we were unable to recover it.
00:29:59.113 [2024-04-26 09:04:16.185146] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.113 [2024-04-26 09:04:16.185297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.113 [2024-04-26 09:04:16.185315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.113 [2024-04-26 09:04:16.185326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.113 [2024-04-26 09:04:16.185334] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.113 [2024-04-26 09:04:16.185354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.113 qpair failed and we were unable to recover it.
00:29:59.113 [2024-04-26 09:04:16.195218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.113 [2024-04-26 09:04:16.195577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.113 [2024-04-26 09:04:16.195597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.113 [2024-04-26 09:04:16.195607] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.113 [2024-04-26 09:04:16.195616] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.113 [2024-04-26 09:04:16.195636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.113 qpair failed and we were unable to recover it.
00:29:59.113 [2024-04-26 09:04:16.205209] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.113 [2024-04-26 09:04:16.205365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.113 [2024-04-26 09:04:16.205384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.113 [2024-04-26 09:04:16.205394] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.113 [2024-04-26 09:04:16.205402] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.113 [2024-04-26 09:04:16.205422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.113 qpair failed and we were unable to recover it.
00:29:59.113 [2024-04-26 09:04:16.215237] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.114 [2024-04-26 09:04:16.215370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.114 [2024-04-26 09:04:16.215390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.114 [2024-04-26 09:04:16.215400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.114 [2024-04-26 09:04:16.215409] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.114 [2024-04-26 09:04:16.215428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.114 qpair failed and we were unable to recover it.
00:29:59.114 [2024-04-26 09:04:16.225273] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.114 [2024-04-26 09:04:16.225615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.114 [2024-04-26 09:04:16.225635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.114 [2024-04-26 09:04:16.225648] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.114 [2024-04-26 09:04:16.225657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.114 [2024-04-26 09:04:16.225676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.114 qpair failed and we were unable to recover it.
00:29:59.114 [2024-04-26 09:04:16.235345] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.114 [2024-04-26 09:04:16.235500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.114 [2024-04-26 09:04:16.235520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.114 [2024-04-26 09:04:16.235530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.114 [2024-04-26 09:04:16.235539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.114 [2024-04-26 09:04:16.235559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.114 qpair failed and we were unable to recover it.
00:29:59.114 [2024-04-26 09:04:16.245299] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.114 [2024-04-26 09:04:16.245433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.114 [2024-04-26 09:04:16.245455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.114 [2024-04-26 09:04:16.245466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.114 [2024-04-26 09:04:16.245475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.114 [2024-04-26 09:04:16.245495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.114 qpair failed and we were unable to recover it.
00:29:59.114 [2024-04-26 09:04:16.255361] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.114 [2024-04-26 09:04:16.255530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.114 [2024-04-26 09:04:16.255549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.114 [2024-04-26 09:04:16.255558] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.114 [2024-04-26 09:04:16.255567] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.114 [2024-04-26 09:04:16.255587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.114 qpair failed and we were unable to recover it.
00:29:59.114 [2024-04-26 09:04:16.265508] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.114 [2024-04-26 09:04:16.265641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.114 [2024-04-26 09:04:16.265660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.114 [2024-04-26 09:04:16.265670] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.114 [2024-04-26 09:04:16.265679] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.114 [2024-04-26 09:04:16.265698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.114 qpair failed and we were unable to recover it.
00:29:59.114 [2024-04-26 09:04:16.275413] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.114 [2024-04-26 09:04:16.275547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.114 [2024-04-26 09:04:16.275566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.114 [2024-04-26 09:04:16.275576] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.114 [2024-04-26 09:04:16.275585] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.114 [2024-04-26 09:04:16.275605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.114 qpair failed and we were unable to recover it.
00:29:59.114 [2024-04-26 09:04:16.285454] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.114 [2024-04-26 09:04:16.285593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.114 [2024-04-26 09:04:16.285611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.114 [2024-04-26 09:04:16.285621] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.114 [2024-04-26 09:04:16.285630] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.114 [2024-04-26 09:04:16.285649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.114 qpair failed and we were unable to recover it.
00:29:59.114 [2024-04-26 09:04:16.295475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.114 [2024-04-26 09:04:16.295610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.114 [2024-04-26 09:04:16.295628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.114 [2024-04-26 09:04:16.295639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.114 [2024-04-26 09:04:16.295648] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.114 [2024-04-26 09:04:16.295667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.114 qpair failed and we were unable to recover it.
00:29:59.114 [2024-04-26 09:04:16.305495] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.114 [2024-04-26 09:04:16.305624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.114 [2024-04-26 09:04:16.305643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.114 [2024-04-26 09:04:16.305653] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.114 [2024-04-26 09:04:16.305662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.114 [2024-04-26 09:04:16.305681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.114 qpair failed and we were unable to recover it.
00:29:59.114 [2024-04-26 09:04:16.315535] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.114 [2024-04-26 09:04:16.315671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.114 [2024-04-26 09:04:16.315693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.114 [2024-04-26 09:04:16.315703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.114 [2024-04-26 09:04:16.315712] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.114 [2024-04-26 09:04:16.315731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.114 qpair failed and we were unable to recover it.
00:29:59.114 [2024-04-26 09:04:16.325555] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.114 [2024-04-26 09:04:16.325690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.114 [2024-04-26 09:04:16.325709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.114 [2024-04-26 09:04:16.325719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.114 [2024-04-26 09:04:16.325728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.114 [2024-04-26 09:04:16.325748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.114 qpair failed and we were unable to recover it.
00:29:59.114 [2024-04-26 09:04:16.335558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.114 [2024-04-26 09:04:16.335693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.114 [2024-04-26 09:04:16.335712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.114 [2024-04-26 09:04:16.335722] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.114 [2024-04-26 09:04:16.335731] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.114 [2024-04-26 09:04:16.335750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.114 qpair failed and we were unable to recover it.
00:29:59.114 [2024-04-26 09:04:16.345617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.114 [2024-04-26 09:04:16.345764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.114 [2024-04-26 09:04:16.345783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.114 [2024-04-26 09:04:16.345793] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.114 [2024-04-26 09:04:16.345802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.114 [2024-04-26 09:04:16.345822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.115 qpair failed and we were unable to recover it.
00:29:59.115 [2024-04-26 09:04:16.355659] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.115 [2024-04-26 09:04:16.355788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.115 [2024-04-26 09:04:16.355807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.115 [2024-04-26 09:04:16.355817] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.115 [2024-04-26 09:04:16.355826] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.115 [2024-04-26 09:04:16.355849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.115 qpair failed and we were unable to recover it.
00:29:59.374 [2024-04-26 09:04:16.365660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.374 [2024-04-26 09:04:16.365795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.374 [2024-04-26 09:04:16.365813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.374 [2024-04-26 09:04:16.365823] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.374 [2024-04-26 09:04:16.365832] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.374 [2024-04-26 09:04:16.365851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.374 qpair failed and we were unable to recover it.
00:29:59.374 [2024-04-26 09:04:16.375820] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.374 [2024-04-26 09:04:16.375949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.374 [2024-04-26 09:04:16.375967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.374 [2024-04-26 09:04:16.375977] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.374 [2024-04-26 09:04:16.375986] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.374 [2024-04-26 09:04:16.376006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.374 qpair failed and we were unable to recover it.
00:29:59.374 [2024-04-26 09:04:16.385733] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.374 [2024-04-26 09:04:16.385863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.374 [2024-04-26 09:04:16.385882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.374 [2024-04-26 09:04:16.385892] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.374 [2024-04-26 09:04:16.385901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.374 [2024-04-26 09:04:16.385920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.374 qpair failed and we were unable to recover it.
00:29:59.374 [2024-04-26 09:04:16.395751] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.374 [2024-04-26 09:04:16.395883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.374 [2024-04-26 09:04:16.395902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.374 [2024-04-26 09:04:16.395912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.374 [2024-04-26 09:04:16.395921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.374 [2024-04-26 09:04:16.395939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.374 qpair failed and we were unable to recover it.
00:29:59.374 [2024-04-26 09:04:16.405728] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.375 [2024-04-26 09:04:16.405860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.375 [2024-04-26 09:04:16.405883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.375 [2024-04-26 09:04:16.405894] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.375 [2024-04-26 09:04:16.405903] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.375 [2024-04-26 09:04:16.405922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.375 qpair failed and we were unable to recover it.
00:29:59.375 [2024-04-26 09:04:16.415807] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.375 [2024-04-26 09:04:16.416150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.375 [2024-04-26 09:04:16.416171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.375 [2024-04-26 09:04:16.416181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.375 [2024-04-26 09:04:16.416190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.375 [2024-04-26 09:04:16.416210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.375 qpair failed and we were unable to recover it.
00:29:59.375 [2024-04-26 09:04:16.425845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.375 [2024-04-26 09:04:16.425977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.375 [2024-04-26 09:04:16.425995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.375 [2024-04-26 09:04:16.426005] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.375 [2024-04-26 09:04:16.426014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.375 [2024-04-26 09:04:16.426033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.375 qpair failed and we were unable to recover it.
00:29:59.375 [2024-04-26 09:04:16.435861] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.375 [2024-04-26 09:04:16.436016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.375 [2024-04-26 09:04:16.436034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.375 [2024-04-26 09:04:16.436044] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.375 [2024-04-26 09:04:16.436053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.375 [2024-04-26 09:04:16.436072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.375 qpair failed and we were unable to recover it.
00:29:59.375 [2024-04-26 09:04:16.445877] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.375 [2024-04-26 09:04:16.446032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.375 [2024-04-26 09:04:16.446051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.375 [2024-04-26 09:04:16.446061] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.375 [2024-04-26 09:04:16.446070] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.375 [2024-04-26 09:04:16.446095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.375 qpair failed and we were unable to recover it.
00:29:59.375 [2024-04-26 09:04:16.455948] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.375 [2024-04-26 09:04:16.456079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.375 [2024-04-26 09:04:16.456098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.375 [2024-04-26 09:04:16.456109] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.375 [2024-04-26 09:04:16.456117] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.375 [2024-04-26 09:04:16.456137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.375 qpair failed and we were unable to recover it.
00:29:59.375 [2024-04-26 09:04:16.465935] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.375 [2024-04-26 09:04:16.466066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.375 [2024-04-26 09:04:16.466084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.375 [2024-04-26 09:04:16.466095] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.375 [2024-04-26 09:04:16.466104] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.375 [2024-04-26 09:04:16.466122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.375 qpair failed and we were unable to recover it.
00:29:59.375 [2024-04-26 09:04:16.475972] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.375 [2024-04-26 09:04:16.476103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.375 [2024-04-26 09:04:16.476122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.375 [2024-04-26 09:04:16.476132] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.375 [2024-04-26 09:04:16.476141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.375 [2024-04-26 09:04:16.476161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.375 qpair failed and we were unable to recover it.
00:29:59.375 [2024-04-26 09:04:16.485988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.375 [2024-04-26 09:04:16.486124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.375 [2024-04-26 09:04:16.486142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.375 [2024-04-26 09:04:16.486153] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.375 [2024-04-26 09:04:16.486161] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.375 [2024-04-26 09:04:16.486180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.375 qpair failed and we were unable to recover it.
00:29:59.375 [2024-04-26 09:04:16.496040] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.375 [2024-04-26 09:04:16.496195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.375 [2024-04-26 09:04:16.496214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.375 [2024-04-26 09:04:16.496224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.375 [2024-04-26 09:04:16.496232] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.375 [2024-04-26 09:04:16.496252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.375 qpair failed and we were unable to recover it.
00:29:59.375 [2024-04-26 09:04:16.506042] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.375 [2024-04-26 09:04:16.506172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.375 [2024-04-26 09:04:16.506190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.375 [2024-04-26 09:04:16.506201] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.375 [2024-04-26 09:04:16.506209] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.375 [2024-04-26 09:04:16.506229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.375 qpair failed and we were unable to recover it.
00:29:59.375 [2024-04-26 09:04:16.516218] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.375 [2024-04-26 09:04:16.516351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.375 [2024-04-26 09:04:16.516369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.375 [2024-04-26 09:04:16.516380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.375 [2024-04-26 09:04:16.516389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.375 [2024-04-26 09:04:16.516409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.375 qpair failed and we were unable to recover it.
00:29:59.375 [2024-04-26 09:04:16.526032] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.375 [2024-04-26 09:04:16.526165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.375 [2024-04-26 09:04:16.526184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.375 [2024-04-26 09:04:16.526194] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.375 [2024-04-26 09:04:16.526203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.375 [2024-04-26 09:04:16.526222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.375 qpair failed and we were unable to recover it.
00:29:59.375 [2024-04-26 09:04:16.536129] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.376 [2024-04-26 09:04:16.536256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.376 [2024-04-26 09:04:16.536275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.376 [2024-04-26 09:04:16.536285] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.376 [2024-04-26 09:04:16.536297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.376 [2024-04-26 09:04:16.536316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.376 qpair failed and we were unable to recover it.
00:29:59.376 [2024-04-26 09:04:16.546138] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.376 [2024-04-26 09:04:16.546269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.376 [2024-04-26 09:04:16.546288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.376 [2024-04-26 09:04:16.546298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.376 [2024-04-26 09:04:16.546307] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.376 [2024-04-26 09:04:16.546327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.376 qpair failed and we were unable to recover it.
00:29:59.376 [2024-04-26 09:04:16.556203] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.376 [2024-04-26 09:04:16.556338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.376 [2024-04-26 09:04:16.556359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.376 [2024-04-26 09:04:16.556369] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.376 [2024-04-26 09:04:16.556378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.376 [2024-04-26 09:04:16.556398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.376 qpair failed and we were unable to recover it.
00:29:59.376 [2024-04-26 09:04:16.566230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.376 [2024-04-26 09:04:16.566366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.376 [2024-04-26 09:04:16.566385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.376 [2024-04-26 09:04:16.566396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.376 [2024-04-26 09:04:16.566404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.376 [2024-04-26 09:04:16.566424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.376 qpair failed and we were unable to recover it.
00:29:59.376 [2024-04-26 09:04:16.576234] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.376 [2024-04-26 09:04:16.576581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.376 [2024-04-26 09:04:16.576601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.376 [2024-04-26 09:04:16.576611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.376 [2024-04-26 09:04:16.576620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.376 [2024-04-26 09:04:16.576640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.376 qpair failed and we were unable to recover it.
00:29:59.376 [2024-04-26 09:04:16.586282] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.376 [2024-04-26 09:04:16.586460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.376 [2024-04-26 09:04:16.586478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.376 [2024-04-26 09:04:16.586488] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.376 [2024-04-26 09:04:16.586497] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.376 [2024-04-26 09:04:16.586517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.376 qpair failed and we were unable to recover it.
00:29:59.376 [2024-04-26 09:04:16.596312] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.376 [2024-04-26 09:04:16.596444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.376 [2024-04-26 09:04:16.596467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.376 [2024-04-26 09:04:16.596478] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.376 [2024-04-26 09:04:16.596487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.376 [2024-04-26 09:04:16.596506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.376 qpair failed and we were unable to recover it.
00:29:59.376 [2024-04-26 09:04:16.606313] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.376 [2024-04-26 09:04:16.606444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.376 [2024-04-26 09:04:16.606467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.376 [2024-04-26 09:04:16.606478] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.376 [2024-04-26 09:04:16.606487] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.376 [2024-04-26 09:04:16.606505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.376 qpair failed and we were unable to recover it.
00:29:59.376 [2024-04-26 09:04:16.616371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.376 [2024-04-26 09:04:16.616524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.376 [2024-04-26 09:04:16.616543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.376 [2024-04-26 09:04:16.616553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.376 [2024-04-26 09:04:16.616562] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.376 [2024-04-26 09:04:16.616581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.376 qpair failed and we were unable to recover it.
00:29:59.636 [2024-04-26 09:04:16.626407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.636 [2024-04-26 09:04:16.626568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.636 [2024-04-26 09:04:16.626587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.636 [2024-04-26 09:04:16.626601] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.636 [2024-04-26 09:04:16.626609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.636 [2024-04-26 09:04:16.626629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.636 qpair failed and we were unable to recover it.
00:29:59.636 [2024-04-26 09:04:16.636433] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.636 [2024-04-26 09:04:16.636569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.636 [2024-04-26 09:04:16.636588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.636 [2024-04-26 09:04:16.636598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.636 [2024-04-26 09:04:16.636607] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.636 [2024-04-26 09:04:16.636626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.636 qpair failed and we were unable to recover it.
00:29:59.636 [2024-04-26 09:04:16.646486] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.636 [2024-04-26 09:04:16.646636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.637 [2024-04-26 09:04:16.646655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.637 [2024-04-26 09:04:16.646665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.637 [2024-04-26 09:04:16.646674] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.637 [2024-04-26 09:04:16.646694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.637 qpair failed and we were unable to recover it.
00:29:59.637 [2024-04-26 09:04:16.656480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.637 [2024-04-26 09:04:16.656613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.637 [2024-04-26 09:04:16.656633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.637 [2024-04-26 09:04:16.656642] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.637 [2024-04-26 09:04:16.656651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.637 [2024-04-26 09:04:16.656670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.637 qpair failed and we were unable to recover it.
00:29:59.637 [2024-04-26 09:04:16.666688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.637 [2024-04-26 09:04:16.666870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.637 [2024-04-26 09:04:16.666889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.637 [2024-04-26 09:04:16.666899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.637 [2024-04-26 09:04:16.666907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.637 [2024-04-26 09:04:16.666928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.637 qpair failed and we were unable to recover it.
00:29:59.637 [2024-04-26 09:04:16.676543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.637 [2024-04-26 09:04:16.676689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.637 [2024-04-26 09:04:16.676708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.637 [2024-04-26 09:04:16.676718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.637 [2024-04-26 09:04:16.676727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.637 [2024-04-26 09:04:16.676746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.637 qpair failed and we were unable to recover it.
00:29:59.637 [2024-04-26 09:04:16.686565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.637 [2024-04-26 09:04:16.686707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.637 [2024-04-26 09:04:16.686725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.637 [2024-04-26 09:04:16.686735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.637 [2024-04-26 09:04:16.686744] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.637 [2024-04-26 09:04:16.686763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.637 qpair failed and we were unable to recover it.
00:29:59.637 [2024-04-26 09:04:16.696719] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.637 [2024-04-26 09:04:16.696860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.637 [2024-04-26 09:04:16.696878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.637 [2024-04-26 09:04:16.696888] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.637 [2024-04-26 09:04:16.696898] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.637 [2024-04-26 09:04:16.696919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.637 qpair failed and we were unable to recover it.
00:29:59.637 [2024-04-26 09:04:16.706576] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.637 [2024-04-26 09:04:16.706744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.637 [2024-04-26 09:04:16.706761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.637 [2024-04-26 09:04:16.706771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.637 [2024-04-26 09:04:16.706780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.637 [2024-04-26 09:04:16.706800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.637 qpair failed and we were unable to recover it.
00:29:59.637 [2024-04-26 09:04:16.716642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.637 [2024-04-26 09:04:16.716774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.637 [2024-04-26 09:04:16.716793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.637 [2024-04-26 09:04:16.716806] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.637 [2024-04-26 09:04:16.716814] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:29:59.637 [2024-04-26 09:04:16.716833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:59.637 qpair failed and we were unable to recover it.
00:29:59.637 [2024-04-26 09:04:16.726676] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.637 [2024-04-26 09:04:16.726813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.637 [2024-04-26 09:04:16.726831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.637 [2024-04-26 09:04:16.726842] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.637 [2024-04-26 09:04:16.726850] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.637 [2024-04-26 09:04:16.726869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.637 qpair failed and we were unable to recover it. 00:29:59.637 [2024-04-26 09:04:16.736742] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.637 [2024-04-26 09:04:16.736900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.637 [2024-04-26 09:04:16.736919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.637 [2024-04-26 09:04:16.736929] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.637 [2024-04-26 09:04:16.736938] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.637 [2024-04-26 09:04:16.736957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.637 qpair failed and we were unable to recover it. 00:29:59.637 [2024-04-26 09:04:16.746771] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.637 [2024-04-26 09:04:16.746903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.637 [2024-04-26 09:04:16.746921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.637 [2024-04-26 09:04:16.746931] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.637 [2024-04-26 09:04:16.746939] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.637 [2024-04-26 09:04:16.746958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.637 qpair failed and we were unable to recover it. 
00:29:59.637 [2024-04-26 09:04:16.756750] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.637 [2024-04-26 09:04:16.756885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.637 [2024-04-26 09:04:16.756904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.637 [2024-04-26 09:04:16.756914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.637 [2024-04-26 09:04:16.756922] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.637 [2024-04-26 09:04:16.756942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.637 qpair failed and we were unable to recover it. 00:29:59.637 [2024-04-26 09:04:16.766761] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.637 [2024-04-26 09:04:16.766898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.637 [2024-04-26 09:04:16.766917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.637 [2024-04-26 09:04:16.766928] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.637 [2024-04-26 09:04:16.766936] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.637 [2024-04-26 09:04:16.766956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.637 qpair failed and we were unable to recover it. 00:29:59.637 [2024-04-26 09:04:16.776816] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.637 [2024-04-26 09:04:16.776989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.637 [2024-04-26 09:04:16.777008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.637 [2024-04-26 09:04:16.777018] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-04-26 09:04:16.777026] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.638 [2024-04-26 09:04:16.777046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.638 qpair failed and we were unable to recover it. 
00:29:59.638 [2024-04-26 09:04:16.786762] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-04-26 09:04:16.786895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-04-26 09:04:16.786914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-04-26 09:04:16.786924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-04-26 09:04:16.786932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.638 [2024-04-26 09:04:16.786952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.638 qpair failed and we were unable to recover it. 00:29:59.638 [2024-04-26 09:04:16.796874] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-04-26 09:04:16.797005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-04-26 09:04:16.797024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-04-26 09:04:16.797034] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-04-26 09:04:16.797042] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.638 [2024-04-26 09:04:16.797062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.638 qpair failed and we were unable to recover it. 00:29:59.638 [2024-04-26 09:04:16.806835] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-04-26 09:04:16.807008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-04-26 09:04:16.807029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-04-26 09:04:16.807040] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-04-26 09:04:16.807048] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.638 [2024-04-26 09:04:16.807068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.638 qpair failed and we were unable to recover it. 
00:29:59.638 [2024-04-26 09:04:16.816921] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-04-26 09:04:16.817150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-04-26 09:04:16.817170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-04-26 09:04:16.817181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-04-26 09:04:16.817190] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.638 [2024-04-26 09:04:16.817210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.638 qpair failed and we were unable to recover it. 00:29:59.638 [2024-04-26 09:04:16.826877] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-04-26 09:04:16.827007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-04-26 09:04:16.827026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-04-26 09:04:16.827036] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-04-26 09:04:16.827046] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.638 [2024-04-26 09:04:16.827065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.638 qpair failed and we were unable to recover it. 00:29:59.638 [2024-04-26 09:04:16.836966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-04-26 09:04:16.837097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-04-26 09:04:16.837115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-04-26 09:04:16.837125] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-04-26 09:04:16.837134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.638 [2024-04-26 09:04:16.837153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.638 qpair failed and we were unable to recover it. 
00:29:59.638 [2024-04-26 09:04:16.846978] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-04-26 09:04:16.847110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-04-26 09:04:16.847128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-04-26 09:04:16.847139] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-04-26 09:04:16.847148] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.638 [2024-04-26 09:04:16.847170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.638 qpair failed and we were unable to recover it. 00:29:59.638 [2024-04-26 09:04:16.857068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-04-26 09:04:16.857304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-04-26 09:04:16.857324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-04-26 09:04:16.857335] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-04-26 09:04:16.857344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.638 [2024-04-26 09:04:16.857364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.638 qpair failed and we were unable to recover it. 00:29:59.638 [2024-04-26 09:04:16.867066] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-04-26 09:04:16.867197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-04-26 09:04:16.867216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-04-26 09:04:16.867226] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-04-26 09:04:16.867235] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.638 [2024-04-26 09:04:16.867255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.638 qpair failed and we were unable to recover it. 
00:29:59.638 [2024-04-26 09:04:16.877017] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.638 [2024-04-26 09:04:16.877367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.638 [2024-04-26 09:04:16.877387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.638 [2024-04-26 09:04:16.877397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.638 [2024-04-26 09:04:16.877406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.638 [2024-04-26 09:04:16.877425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.638 qpair failed and we were unable to recover it. 00:29:59.899 [2024-04-26 09:04:16.887121] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.899 [2024-04-26 09:04:16.887298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.899 [2024-04-26 09:04:16.887316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.899 [2024-04-26 09:04:16.887326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.899 [2024-04-26 09:04:16.887335] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.899 [2024-04-26 09:04:16.887354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.899 qpair failed and we were unable to recover it. 00:29:59.899 [2024-04-26 09:04:16.897144] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.899 [2024-04-26 09:04:16.897276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.899 [2024-04-26 09:04:16.897298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.899 [2024-04-26 09:04:16.897308] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.899 [2024-04-26 09:04:16.897316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.899 [2024-04-26 09:04:16.897336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.899 qpair failed and we were unable to recover it. 
00:29:59.899 [2024-04-26 09:04:16.907164] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.899 [2024-04-26 09:04:16.907295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.899 [2024-04-26 09:04:16.907316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.899 [2024-04-26 09:04:16.907326] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.899 [2024-04-26 09:04:16.907336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.899 [2024-04-26 09:04:16.907356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.899 qpair failed and we were unable to recover it. 00:29:59.899 [2024-04-26 09:04:16.917226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.899 [2024-04-26 09:04:16.917385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.899 [2024-04-26 09:04:16.917403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.899 [2024-04-26 09:04:16.917413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.899 [2024-04-26 09:04:16.917422] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.899 [2024-04-26 09:04:16.917442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.899 qpair failed and we were unable to recover it. 00:29:59.899 [2024-04-26 09:04:16.927179] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.899 [2024-04-26 09:04:16.927315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.899 [2024-04-26 09:04:16.927334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.899 [2024-04-26 09:04:16.927344] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.899 [2024-04-26 09:04:16.927352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.899 [2024-04-26 09:04:16.927372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.900 qpair failed and we were unable to recover it. 
00:29:59.900 [2024-04-26 09:04:16.937240] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.900 [2024-04-26 09:04:16.937377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.900 [2024-04-26 09:04:16.937395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.900 [2024-04-26 09:04:16.937406] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.900 [2024-04-26 09:04:16.937417] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.900 [2024-04-26 09:04:16.937437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.900 qpair failed and we were unable to recover it. 00:29:59.900 [2024-04-26 09:04:16.947234] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.900 [2024-04-26 09:04:16.947404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.900 [2024-04-26 09:04:16.947423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.900 [2024-04-26 09:04:16.947433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.900 [2024-04-26 09:04:16.947441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.900 [2024-04-26 09:04:16.947465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.900 qpair failed and we were unable to recover it. 00:29:59.900 [2024-04-26 09:04:16.957251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.900 [2024-04-26 09:04:16.957383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.900 [2024-04-26 09:04:16.957402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.900 [2024-04-26 09:04:16.957412] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.900 [2024-04-26 09:04:16.957421] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.900 [2024-04-26 09:04:16.957441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.900 qpair failed and we were unable to recover it. 
00:29:59.900 [2024-04-26 09:04:16.967328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.900 [2024-04-26 09:04:16.967671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.900 [2024-04-26 09:04:16.967691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.900 [2024-04-26 09:04:16.967701] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.900 [2024-04-26 09:04:16.967710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.900 [2024-04-26 09:04:16.967730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.900 qpair failed and we were unable to recover it. 00:29:59.900 [2024-04-26 09:04:16.977362] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.900 [2024-04-26 09:04:16.977500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.900 [2024-04-26 09:04:16.977518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.900 [2024-04-26 09:04:16.977528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.900 [2024-04-26 09:04:16.977537] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.900 [2024-04-26 09:04:16.977557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.900 qpair failed and we were unable to recover it. 00:29:59.900 [2024-04-26 09:04:16.987330] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.900 [2024-04-26 09:04:16.987467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.900 [2024-04-26 09:04:16.987486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.900 [2024-04-26 09:04:16.987496] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.900 [2024-04-26 09:04:16.987505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.900 [2024-04-26 09:04:16.987524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.900 qpair failed and we were unable to recover it. 
00:29:59.900 [2024-04-26 09:04:16.997367] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.900 [2024-04-26 09:04:16.997502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.900 [2024-04-26 09:04:16.997521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.900 [2024-04-26 09:04:16.997531] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.900 [2024-04-26 09:04:16.997540] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.900 [2024-04-26 09:04:16.997559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.900 qpair failed and we were unable to recover it. 00:29:59.900 [2024-04-26 09:04:17.007475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.900 [2024-04-26 09:04:17.007608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.900 [2024-04-26 09:04:17.007627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.900 [2024-04-26 09:04:17.007637] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.900 [2024-04-26 09:04:17.007646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.900 [2024-04-26 09:04:17.007666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.900 qpair failed and we were unable to recover it. 00:29:59.900 [2024-04-26 09:04:17.017455] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.900 [2024-04-26 09:04:17.017626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.900 [2024-04-26 09:04:17.017644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.900 [2024-04-26 09:04:17.017654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.900 [2024-04-26 09:04:17.017663] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.900 [2024-04-26 09:04:17.017684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.900 qpair failed and we were unable to recover it. 
00:29:59.900 [2024-04-26 09:04:17.027473] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.900 [2024-04-26 09:04:17.027635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.900 [2024-04-26 09:04:17.027654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.900 [2024-04-26 09:04:17.027667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.900 [2024-04-26 09:04:17.027676] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.900 [2024-04-26 09:04:17.027696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.900 qpair failed and we were unable to recover it. 00:29:59.900 [2024-04-26 09:04:17.037483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.900 [2024-04-26 09:04:17.037623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.900 [2024-04-26 09:04:17.037642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.900 [2024-04-26 09:04:17.037652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.900 [2024-04-26 09:04:17.037661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.900 [2024-04-26 09:04:17.037680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.900 qpair failed and we were unable to recover it. 00:29:59.900 [2024-04-26 09:04:17.047510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.900 [2024-04-26 09:04:17.047684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.900 [2024-04-26 09:04:17.047705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.900 [2024-04-26 09:04:17.047716] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.900 [2024-04-26 09:04:17.047725] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.900 [2024-04-26 09:04:17.047745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.900 qpair failed and we were unable to recover it. 
00:29:59.900 [2024-04-26 09:04:17.057544] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.900 [2024-04-26 09:04:17.057677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.900 [2024-04-26 09:04:17.057696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.900 [2024-04-26 09:04:17.057706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.900 [2024-04-26 09:04:17.057715] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.900 [2024-04-26 09:04:17.057734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.900 qpair failed and we were unable to recover it. 00:29:59.901 [2024-04-26 09:04:17.067726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.901 [2024-04-26 09:04:17.067872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.901 [2024-04-26 09:04:17.067890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.901 [2024-04-26 09:04:17.067900] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.901 [2024-04-26 09:04:17.067909] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.901 [2024-04-26 09:04:17.067929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.901 qpair failed and we were unable to recover it. 00:29:59.901 [2024-04-26 09:04:17.077712] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.901 [2024-04-26 09:04:17.077843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.901 [2024-04-26 09:04:17.077862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.901 [2024-04-26 09:04:17.077872] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.901 [2024-04-26 09:04:17.077881] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.901 [2024-04-26 09:04:17.077900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.901 qpair failed and we were unable to recover it. 
00:29:59.901 [2024-04-26 09:04:17.087705] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.901 [2024-04-26 09:04:17.087839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.901 [2024-04-26 09:04:17.087858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.901 [2024-04-26 09:04:17.087867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.901 [2024-04-26 09:04:17.087876] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.901 [2024-04-26 09:04:17.087896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.901 qpair failed and we were unable to recover it. 00:29:59.901 [2024-04-26 09:04:17.097743] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.901 [2024-04-26 09:04:17.097914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.901 [2024-04-26 09:04:17.097932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.901 [2024-04-26 09:04:17.097943] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.901 [2024-04-26 09:04:17.097953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.901 [2024-04-26 09:04:17.097972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.901 qpair failed and we were unable to recover it. 00:29:59.901 [2024-04-26 09:04:17.107792] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.901 [2024-04-26 09:04:17.107940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.901 [2024-04-26 09:04:17.107958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.901 [2024-04-26 09:04:17.107968] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.901 [2024-04-26 09:04:17.107977] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.901 [2024-04-26 09:04:17.107997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.901 qpair failed and we were unable to recover it. 
00:29:59.901 [2024-04-26 09:04:17.117764] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.901 [2024-04-26 09:04:17.117896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.901 [2024-04-26 09:04:17.117915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.901 [2024-04-26 09:04:17.117929] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.901 [2024-04-26 09:04:17.117938] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.901 [2024-04-26 09:04:17.117957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.901 qpair failed and we were unable to recover it. 00:29:59.901 [2024-04-26 09:04:17.127773] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.901 [2024-04-26 09:04:17.127951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.901 [2024-04-26 09:04:17.127972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.901 [2024-04-26 09:04:17.127982] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.901 [2024-04-26 09:04:17.127991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.901 [2024-04-26 09:04:17.128011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.901 qpair failed and we were unable to recover it. 00:29:59.901 [2024-04-26 09:04:17.137770] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.901 [2024-04-26 09:04:17.137954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.901 [2024-04-26 09:04:17.137972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.901 [2024-04-26 09:04:17.137981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.901 [2024-04-26 09:04:17.137991] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:29:59.901 [2024-04-26 09:04:17.138011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:59.901 qpair failed and we were unable to recover it. 
00:30:00.161 [2024-04-26 09:04:17.147881] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.161 [2024-04-26 09:04:17.148014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.161 [2024-04-26 09:04:17.148032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.161 [2024-04-26 09:04:17.148042] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.161 [2024-04-26 09:04:17.148051] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.161 [2024-04-26 09:04:17.148070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.161 qpair failed and we were unable to recover it. 00:30:00.161 [2024-04-26 09:04:17.157873] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.161 [2024-04-26 09:04:17.158015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.162 [2024-04-26 09:04:17.158034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.162 [2024-04-26 09:04:17.158044] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.162 [2024-04-26 09:04:17.158053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.162 [2024-04-26 09:04:17.158072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.162 qpair failed and we were unable to recover it. 00:30:00.162 [2024-04-26 09:04:17.167899] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.162 [2024-04-26 09:04:17.168029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.162 [2024-04-26 09:04:17.168048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.162 [2024-04-26 09:04:17.168058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.162 [2024-04-26 09:04:17.168067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.162 [2024-04-26 09:04:17.168086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.162 qpair failed and we were unable to recover it. 
00:30:00.162 [2024-04-26 09:04:17.177943] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.162 [2024-04-26 09:04:17.178080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.162 [2024-04-26 09:04:17.178099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.162 [2024-04-26 09:04:17.178109] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.162 [2024-04-26 09:04:17.178118] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.162 [2024-04-26 09:04:17.178137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.162 qpair failed and we were unable to recover it. 00:30:00.162 [2024-04-26 09:04:17.187968] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.162 [2024-04-26 09:04:17.188101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.162 [2024-04-26 09:04:17.188119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.162 [2024-04-26 09:04:17.188129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.162 [2024-04-26 09:04:17.188138] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.162 [2024-04-26 09:04:17.188157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.162 qpair failed and we were unable to recover it. 00:30:00.162 [2024-04-26 09:04:17.198008] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.162 [2024-04-26 09:04:17.198142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.162 [2024-04-26 09:04:17.198160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.162 [2024-04-26 09:04:17.198171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.162 [2024-04-26 09:04:17.198179] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.162 [2024-04-26 09:04:17.198199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.162 qpair failed and we were unable to recover it. 
[... the identical seven-record CONNECT failure repeats for every subsequent attempt, roughly every 10 ms from 09:04:17.207933 through 09:04:17.849, each block ending in "qpair failed and we were unable to recover it." ...]
00:30:00.687 [2024-04-26 09:04:17.859860] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.687 [2024-04-26 09:04:17.860030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.687 [2024-04-26 09:04:17.860050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.687 [2024-04-26 09:04:17.860060] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.687 [2024-04-26 09:04:17.860069] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90
00:30:00.687 [2024-04-26 09:04:17.860089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:00.687 qpair failed and we were unable to recover it.
00:30:00.687 [2024-04-26 09:04:17.869833] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.687 [2024-04-26 09:04:17.870190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.687 [2024-04-26 09:04:17.870208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.687 [2024-04-26 09:04:17.870218] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.687 [2024-04-26 09:04:17.870227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.687 [2024-04-26 09:04:17.870246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.687 qpair failed and we were unable to recover it. 00:30:00.687 [2024-04-26 09:04:17.879882] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.687 [2024-04-26 09:04:17.880013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.688 [2024-04-26 09:04:17.880032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.688 [2024-04-26 09:04:17.880042] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.688 [2024-04-26 09:04:17.880050] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.688 [2024-04-26 09:04:17.880070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.688 qpair failed and we were unable to recover it. 00:30:00.688 [2024-04-26 09:04:17.889896] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.688 [2024-04-26 09:04:17.890023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.688 [2024-04-26 09:04:17.890042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.688 [2024-04-26 09:04:17.890052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.688 [2024-04-26 09:04:17.890061] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.688 [2024-04-26 09:04:17.890080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.688 qpair failed and we were unable to recover it. 
00:30:00.688 [2024-04-26 09:04:17.899930] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.688 [2024-04-26 09:04:17.900057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.688 [2024-04-26 09:04:17.900075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.688 [2024-04-26 09:04:17.900085] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.688 [2024-04-26 09:04:17.900094] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.688 [2024-04-26 09:04:17.900113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.688 qpair failed and we were unable to recover it. 00:30:00.688 [2024-04-26 09:04:17.909978] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.688 [2024-04-26 09:04:17.910159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.688 [2024-04-26 09:04:17.910180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.688 [2024-04-26 09:04:17.910190] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.688 [2024-04-26 09:04:17.910199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.688 [2024-04-26 09:04:17.910218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.688 qpair failed and we were unable to recover it. 00:30:00.688 [2024-04-26 09:04:17.920041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.688 [2024-04-26 09:04:17.920175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.688 [2024-04-26 09:04:17.920194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.688 [2024-04-26 09:04:17.920207] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.688 [2024-04-26 09:04:17.920216] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.688 [2024-04-26 09:04:17.920236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.688 qpair failed and we were unable to recover it. 
00:30:00.688 [2024-04-26 09:04:17.930032] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.688 [2024-04-26 09:04:17.930190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.688 [2024-04-26 09:04:17.930209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.688 [2024-04-26 09:04:17.930219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.688 [2024-04-26 09:04:17.930228] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.688 [2024-04-26 09:04:17.930247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.688 qpair failed and we were unable to recover it. 00:30:00.948 [2024-04-26 09:04:17.940046] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.948 [2024-04-26 09:04:17.940177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.949 [2024-04-26 09:04:17.940195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.949 [2024-04-26 09:04:17.940205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.949 [2024-04-26 09:04:17.940213] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.949 [2024-04-26 09:04:17.940233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.949 qpair failed and we were unable to recover it. 00:30:00.949 [2024-04-26 09:04:17.950091] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.949 [2024-04-26 09:04:17.950258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.949 [2024-04-26 09:04:17.950276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.949 [2024-04-26 09:04:17.950287] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.949 [2024-04-26 09:04:17.950296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.949 [2024-04-26 09:04:17.950315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.949 qpair failed and we were unable to recover it. 
00:30:00.949 [2024-04-26 09:04:17.960110] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.949 [2024-04-26 09:04:17.960240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.949 [2024-04-26 09:04:17.960258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.949 [2024-04-26 09:04:17.960269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.949 [2024-04-26 09:04:17.960277] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.949 [2024-04-26 09:04:17.960297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.949 qpair failed and we were unable to recover it. 00:30:00.949 [2024-04-26 09:04:17.970128] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.949 [2024-04-26 09:04:17.970261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.949 [2024-04-26 09:04:17.970279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.949 [2024-04-26 09:04:17.970289] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.949 [2024-04-26 09:04:17.970298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.949 [2024-04-26 09:04:17.970317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.949 qpair failed and we were unable to recover it. 00:30:00.949 [2024-04-26 09:04:17.980163] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.949 [2024-04-26 09:04:17.980294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.949 [2024-04-26 09:04:17.980313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.949 [2024-04-26 09:04:17.980323] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.949 [2024-04-26 09:04:17.980331] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.949 [2024-04-26 09:04:17.980351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.949 qpair failed and we were unable to recover it. 
00:30:00.949 [2024-04-26 09:04:17.990217] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.949 [2024-04-26 09:04:17.990363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.949 [2024-04-26 09:04:17.990382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.949 [2024-04-26 09:04:17.990392] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.949 [2024-04-26 09:04:17.990401] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.949 [2024-04-26 09:04:17.990420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.949 qpair failed and we were unable to recover it. 00:30:00.949 [2024-04-26 09:04:18.000220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.949 [2024-04-26 09:04:18.000367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.949 [2024-04-26 09:04:18.000386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.949 [2024-04-26 09:04:18.000396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.949 [2024-04-26 09:04:18.000404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.949 [2024-04-26 09:04:18.000423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.949 qpair failed and we were unable to recover it. 00:30:00.949 [2024-04-26 09:04:18.010296] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.949 [2024-04-26 09:04:18.010441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.949 [2024-04-26 09:04:18.010469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.949 [2024-04-26 09:04:18.010480] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.949 [2024-04-26 09:04:18.010489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.949 [2024-04-26 09:04:18.010508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.949 qpair failed and we were unable to recover it. 
00:30:00.949 [2024-04-26 09:04:18.020297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.949 [2024-04-26 09:04:18.020455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.949 [2024-04-26 09:04:18.020474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.949 [2024-04-26 09:04:18.020484] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.949 [2024-04-26 09:04:18.020493] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.949 [2024-04-26 09:04:18.020512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.949 qpair failed and we were unable to recover it. 00:30:00.949 [2024-04-26 09:04:18.030337] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.949 [2024-04-26 09:04:18.030475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.949 [2024-04-26 09:04:18.030494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.949 [2024-04-26 09:04:18.030504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.949 [2024-04-26 09:04:18.030513] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.949 [2024-04-26 09:04:18.030532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.949 qpair failed and we were unable to recover it. 00:30:00.949 [2024-04-26 09:04:18.040336] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.949 [2024-04-26 09:04:18.040486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.949 [2024-04-26 09:04:18.040505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.949 [2024-04-26 09:04:18.040515] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.949 [2024-04-26 09:04:18.040524] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.949 [2024-04-26 09:04:18.040543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.949 qpair failed and we were unable to recover it. 
00:30:00.949 [2024-04-26 09:04:18.050364] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.949 [2024-04-26 09:04:18.050512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.949 [2024-04-26 09:04:18.050530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.949 [2024-04-26 09:04:18.050540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.949 [2024-04-26 09:04:18.050549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.949 [2024-04-26 09:04:18.050572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.949 qpair failed and we were unable to recover it. 00:30:00.949 [2024-04-26 09:04:18.060399] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.949 [2024-04-26 09:04:18.060546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.949 [2024-04-26 09:04:18.060565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.949 [2024-04-26 09:04:18.060575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.949 [2024-04-26 09:04:18.060584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.949 [2024-04-26 09:04:18.060603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.949 qpair failed and we were unable to recover it. 00:30:00.949 [2024-04-26 09:04:18.070343] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.949 [2024-04-26 09:04:18.070479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.949 [2024-04-26 09:04:18.070497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.949 [2024-04-26 09:04:18.070507] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.949 [2024-04-26 09:04:18.070516] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.950 [2024-04-26 09:04:18.070536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.950 qpair failed and we were unable to recover it. 
00:30:00.950 [2024-04-26 09:04:18.080496] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.950 [2024-04-26 09:04:18.080656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.950 [2024-04-26 09:04:18.080676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.950 [2024-04-26 09:04:18.080687] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.950 [2024-04-26 09:04:18.080696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.950 [2024-04-26 09:04:18.080716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.950 qpair failed and we were unable to recover it. 00:30:00.950 [2024-04-26 09:04:18.090475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.950 [2024-04-26 09:04:18.090607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.950 [2024-04-26 09:04:18.090626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.950 [2024-04-26 09:04:18.090636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.950 [2024-04-26 09:04:18.090645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.950 [2024-04-26 09:04:18.090664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.950 qpair failed and we were unable to recover it. 00:30:00.950 [2024-04-26 09:04:18.100501] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.950 [2024-04-26 09:04:18.100634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.950 [2024-04-26 09:04:18.100655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.950 [2024-04-26 09:04:18.100665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.950 [2024-04-26 09:04:18.100675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.950 [2024-04-26 09:04:18.100695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.950 qpair failed and we were unable to recover it. 
00:30:00.950 [2024-04-26 09:04:18.110531] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.950 [2024-04-26 09:04:18.110664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.950 [2024-04-26 09:04:18.110683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.950 [2024-04-26 09:04:18.110693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.950 [2024-04-26 09:04:18.110701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.950 [2024-04-26 09:04:18.110721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.950 qpair failed and we were unable to recover it. 00:30:00.950 [2024-04-26 09:04:18.120542] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.950 [2024-04-26 09:04:18.120674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.950 [2024-04-26 09:04:18.120693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.950 [2024-04-26 09:04:18.120703] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.950 [2024-04-26 09:04:18.120712] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.950 [2024-04-26 09:04:18.120731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.950 qpair failed and we were unable to recover it. 00:30:00.950 [2024-04-26 09:04:18.130608] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.950 [2024-04-26 09:04:18.130739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.950 [2024-04-26 09:04:18.130757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.950 [2024-04-26 09:04:18.130767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.950 [2024-04-26 09:04:18.130776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcdb4000b90 00:30:00.950 [2024-04-26 09:04:18.130795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:00.950 qpair failed and we were unable to recover it. 00:30:00.950 [2024-04-26 09:04:18.130941] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:00.950 A controller has encountered a failure and is being reset. 00:30:00.950 Controller properly reset. 
00:30:00.950 Initializing NVMe Controllers
00:30:00.950 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:00.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:00.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:30:00.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:30:00.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:30:00.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:30:00.950 Initialization complete. Launching workers.
00:30:00.950 Starting thread on core 1
00:30:00.950 Starting thread on core 2
00:30:00.950 Starting thread on core 3
00:30:00.950 Starting thread on core 0
00:30:00.950 09:04:18 -- host/target_disconnect.sh@59 -- # sync
00:30:00.950
00:30:00.950 real 0m11.350s
00:30:00.950 user 0m19.777s
00:30:00.950 sys 0m4.878s
00:30:00.950 09:04:18 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:30:00.950 09:04:18 -- common/autotest_common.sh@10 -- # set +x
00:30:00.950 ************************************
00:30:00.950 END TEST nvmf_target_disconnect_tc2
00:30:00.950 ************************************
00:30:01.209 09:04:18 -- host/target_disconnect.sh@80 -- # '[' -n '' ']'
00:30:01.209 09:04:18 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:30:01.209 09:04:18 -- host/target_disconnect.sh@85 -- # nvmftestfini
00:30:01.209 09:04:18 -- nvmf/common.sh@477 -- # nvmfcleanup
00:30:01.209 09:04:18 -- nvmf/common.sh@117 -- # sync
00:30:01.209 09:04:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:01.209 09:04:18 -- nvmf/common.sh@120 -- # set +e
00:30:01.209 09:04:18 -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:01.209 09:04:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:01.209 rmmod nvme_tcp
00:30:01.209 rmmod nvme_fabrics
00:30:01.209 rmmod nvme_keyring
00:30:01.209 09:04:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:01.209 09:04:18 -- nvmf/common.sh@124 -- # set -e
00:30:01.209 09:04:18 -- nvmf/common.sh@125 -- # return 0
00:30:01.209 09:04:18 -- nvmf/common.sh@478 -- # '[' -n 2238246 ']'
00:30:01.209 09:04:18 -- nvmf/common.sh@479 -- # killprocess 2238246
00:30:01.209 09:04:18 -- common/autotest_common.sh@936 -- # '[' -z 2238246 ']'
00:30:01.209 09:04:18 -- common/autotest_common.sh@940 -- # kill -0 2238246
00:30:01.209 09:04:18 -- common/autotest_common.sh@941 -- # uname
00:30:01.209 09:04:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:30:01.209 09:04:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2238246
00:30:01.209 09:04:18 -- common/autotest_common.sh@942 -- # process_name=reactor_4
00:30:01.209 09:04:18 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']'
00:30:01.209 09:04:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2238246'
00:30:01.209 killing process with pid 2238246
00:30:01.209 09:04:18 -- common/autotest_common.sh@955 -- # kill 2238246
00:30:01.209 09:04:18 -- common/autotest_common.sh@960 -- # wait 2238246
00:30:01.468 09:04:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:30:01.468 09:04:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:30:01.468 09:04:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:30:01.468 09:04:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:01.468 09:04:18 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:30:01.468 09:04:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:01.468 09:04:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:01.468 09:04:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:04.002 09:04:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:30:04.002
00:30:04.002 real 0m21.421s
00:30:04.002 user 0m47.525s
00:30:04.002 sys 0m10.797s
00:30:04.002 09:04:20 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:30:04.002 09:04:20 -- common/autotest_common.sh@10 -- # set +x
00:30:04.002 ************************************
00:30:04.002 END TEST nvmf_target_disconnect
00:30:04.002 ************************************
00:30:04.002 09:04:20 -- nvmf/nvmf.sh@123 -- # timing_exit host
00:30:04.002 09:04:20 -- common/autotest_common.sh@716 -- # xtrace_disable
00:30:04.002 09:04:20 -- common/autotest_common.sh@10 -- # set +x
00:30:04.002 09:04:20 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT
00:30:04.002
00:30:04.002 real 19m30.758s
00:30:04.002 user 38m40.223s
00:30:04.002 sys 7m28.099s
00:30:04.002 09:04:20 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:30:04.002 09:04:20 -- common/autotest_common.sh@10 -- # set +x
00:30:04.002 ************************************
00:30:04.002 END TEST nvmf_tcp
00:30:04.002 ************************************
00:30:04.003 09:04:20 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]]
00:30:04.003 09:04:20 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:30:04.003 09:04:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:30:04.003 09:04:20 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:04.003 09:04:20 -- common/autotest_common.sh@10 -- # set +x
00:30:04.003 ************************************
00:30:04.003 START TEST spdkcli_nvmf_tcp
00:30:04.003 ************************************
00:30:04.003 09:04:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:30:04.003 * Looking for test storage...
00:30:04.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:04.003 09:04:21 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:04.003 09:04:21 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:04.003 09:04:21 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:04.003 09:04:21 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.003 09:04:21 -- nvmf/common.sh@7 -- # uname -s 00:30:04.003 09:04:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.003 09:04:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.003 09:04:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.003 09:04:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.003 09:04:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.003 09:04:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.003 09:04:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.003 09:04:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.003 09:04:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.003 09:04:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.003 09:04:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:04.003 09:04:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:04.003 09:04:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.003 09:04:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.003 09:04:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.003 09:04:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.003 09:04:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.003 09:04:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.003 09:04:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.003 09:04:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.003 09:04:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.003 09:04:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.003 09:04:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.003 09:04:21 -- paths/export.sh@5 -- # export PATH 00:30:04.003 09:04:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.003 09:04:21 -- nvmf/common.sh@47 -- # : 0 00:30:04.003 09:04:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:04.003 09:04:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:04.003 09:04:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.003 09:04:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.003 09:04:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.003 09:04:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:04.003 09:04:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:04.003 09:04:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:04.003 09:04:21 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:04.003 09:04:21 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:04.003 09:04:21 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:04.003 09:04:21 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:04.003 09:04:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:04.003 09:04:21 -- common/autotest_common.sh@10 -- # set +x 00:30:04.003 09:04:21 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:04.003 09:04:21 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2239933 00:30:04.003 09:04:21 -- spdkcli/common.sh@34 -- # waitforlisten 2239933 00:30:04.003 09:04:21 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:04.003 09:04:21 -- common/autotest_common.sh@817 -- # '[' -z 2239933 ']' 00:30:04.003 09:04:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.003 09:04:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:04.003 09:04:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.003 09:04:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:04.003 09:04:21 -- common/autotest_common.sh@10 -- # set +x 00:30:04.003 [2024-04-26 09:04:21.133882] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:30:04.003 [2024-04-26 09:04:21.133938] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2239933 ] 00:30:04.003 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.003 [2024-04-26 09:04:21.203539] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:04.261 [2024-04-26 09:04:21.276835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.261 [2024-04-26 09:04:21.276839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.827 09:04:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:04.827 09:04:21 -- common/autotest_common.sh@850 -- # return 0 00:30:04.827 09:04:21 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:04.827 09:04:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:04.827 09:04:21 -- common/autotest_common.sh@10 -- # set +x 00:30:04.827 09:04:21 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:04.827 09:04:21 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:04.827 09:04:21 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:04.827 09:04:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:04.827 09:04:21 -- common/autotest_common.sh@10 -- # set +x 00:30:04.827 09:04:21 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:04.827 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:04.827 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:04.827 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:04.827 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:04.827 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:04.827 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:04.827 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:04.827 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:04.827 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' 
'\''127.0.0.1:4261'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:04.827 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:04.827 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:04.827 ' 00:30:05.085 [2024-04-26 09:04:22.313826] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:07.648 [2024-04-26 09:04:24.349889] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.582 [2024-04-26 09:04:25.533847] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:10.481 [2024-04-26 09:04:27.700415] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:12.378 [2024-04-26 09:04:29.558143] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:13.753 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:13.753 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:13.753 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:13.753 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:13.753 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:13.753 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:13.753 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:13.753 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:13.753 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:13.753 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:13.753 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:13.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:13.753 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:14.011 09:04:31 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:14.011 09:04:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:14.011 09:04:31 -- common/autotest_common.sh@10 -- # set +x 00:30:14.011 09:04:31 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:14.011 09:04:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:14.011 09:04:31 -- common/autotest_common.sh@10 -- # set +x 00:30:14.011 09:04:31 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:14.011 09:04:31 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:14.269 09:04:31 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:14.527 09:04:31 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:14.527 09:04:31 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:14.527 09:04:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:14.527 09:04:31 -- common/autotest_common.sh@10 -- # set +x 00:30:14.527 09:04:31 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:14.527 09:04:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:14.527 09:04:31 -- common/autotest_common.sh@10 
-- # set +x 00:30:14.527 09:04:31 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:14.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:14.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:14.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:14.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:14.527 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:14.527 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:14.527 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:14.527 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:14.527 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:14.527 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:14.527 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:14.527 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:14.527 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:14.527 ' 00:30:19.791 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:19.791 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:19.791 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:19.791 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:19.791 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:19.791 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:19.791 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:19.791 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:19.791 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:19.791 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:19.791 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:19.791 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:19.791 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:19.791 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:19.792 09:04:36 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:19.792 09:04:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:19.792 09:04:36 -- common/autotest_common.sh@10 -- # set +x 00:30:19.792 09:04:36 -- spdkcli/nvmf.sh@90 -- # killprocess 2239933 00:30:19.792 09:04:36 -- common/autotest_common.sh@936 -- # '[' -z 2239933 ']' 00:30:19.792 09:04:36 -- common/autotest_common.sh@940 -- # kill -0 2239933 00:30:19.792 09:04:36 -- common/autotest_common.sh@941 -- # uname 00:30:19.792 09:04:36 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:19.792 09:04:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2239933 00:30:19.792 09:04:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:19.792 09:04:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:19.792 09:04:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2239933' 00:30:19.792 killing process with pid 2239933 00:30:19.792 09:04:36 -- common/autotest_common.sh@955 -- # kill 2239933 00:30:19.792 [2024-04-26 09:04:36.657869] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:19.792 09:04:36 -- common/autotest_common.sh@960 -- # wait 2239933 00:30:19.792 09:04:36 -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:19.792 09:04:36 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:19.792 09:04:36 -- spdkcli/common.sh@13 -- # '[' -n 2239933 ']' 00:30:19.792 09:04:36 -- spdkcli/common.sh@14 -- # killprocess 2239933 00:30:19.792 09:04:36 -- common/autotest_common.sh@936 -- # '[' -z 2239933 ']' 00:30:19.792 09:04:36 -- common/autotest_common.sh@940 -- # kill -0 2239933 00:30:19.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2239933) - No such process 00:30:19.792 09:04:36 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2239933 is not found' 00:30:19.792 Process with pid 2239933 is not found 00:30:19.792 09:04:36 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:19.792 09:04:36 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:19.792 09:04:36 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:19.792 00:30:19.792 real 0m15.929s 00:30:19.792 user 0m32.843s 00:30:19.792 sys 0m0.872s 00:30:19.792 09:04:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:19.792 09:04:36 -- common/autotest_common.sh@10 -- # set +x 00:30:19.792 ************************************ 00:30:19.792 END TEST spdkcli_nvmf_tcp 00:30:19.792 ************************************ 00:30:19.792 09:04:36 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:19.792 09:04:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:19.792 09:04:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:19.792 09:04:36 -- common/autotest_common.sh@10 -- # set +x 00:30:20.050 ************************************ 00:30:20.050 START TEST nvmf_identify_passthru 00:30:20.050 ************************************ 00:30:20.050 09:04:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:20.050 * Looking for test storage... 
00:30:20.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:20.050 09:04:37 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:20.050 09:04:37 -- nvmf/common.sh@7 -- # uname -s 00:30:20.050 09:04:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:20.050 09:04:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:20.050 09:04:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:20.050 09:04:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:20.050 09:04:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:20.050 09:04:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:20.050 09:04:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:20.050 09:04:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:20.050 09:04:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:20.050 09:04:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:20.050 09:04:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:20.050 09:04:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:20.050 09:04:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:20.050 09:04:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:20.051 09:04:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:20.051 09:04:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:20.051 09:04:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.051 09:04:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.051 09:04:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.051 09:04:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.051 09:04:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.051 09:04:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.051 09:04:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.051 09:04:37 -- paths/export.sh@5 -- # export PATH 00:30:20.051 09:04:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.051 09:04:37 -- nvmf/common.sh@47 -- # : 0 00:30:20.051 09:04:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:20.051 09:04:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:20.051 09:04:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:20.051 09:04:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:20.051 09:04:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:20.051 09:04:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:20.051 09:04:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:20.051 09:04:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:20.051 09:04:37 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:20.051 09:04:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:20.051 09:04:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:20.051 09:04:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:20.051 09:04:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.051 09:04:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.051 09:04:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.051 09:04:37 -- paths/export.sh@5 -- # export PATH 00:30:20.051 09:04:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:20.051 09:04:37 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:30:20.051 09:04:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:20.051 09:04:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:20.051 09:04:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:20.051 09:04:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:20.051 09:04:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:20.051 09:04:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.051 09:04:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:20.051 09:04:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:20.051 09:04:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:30:20.051 09:04:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:20.051 09:04:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:20.051 09:04:37 -- common/autotest_common.sh@10 -- # set +x 00:30:28.171 09:04:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:28.171 09:04:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:28.171 09:04:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:28.171 09:04:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:28.171 09:04:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:28.171 09:04:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:28.171 09:04:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:28.171 09:04:43 -- nvmf/common.sh@295 -- # net_devs=() 00:30:28.171 09:04:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:28.171 09:04:43 -- nvmf/common.sh@296 -- # e810=() 00:30:28.171 09:04:43 -- nvmf/common.sh@296 -- # local -ga e810 00:30:28.171 09:04:43 -- nvmf/common.sh@297 -- # x722=() 00:30:28.171 09:04:43 -- nvmf/common.sh@297 -- # local -ga x722 00:30:28.171 09:04:43 -- nvmf/common.sh@298 -- # mlx=() 00:30:28.171 09:04:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:28.171 09:04:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.171 09:04:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.171 09:04:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.171 09:04:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.171 09:04:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.171 09:04:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.171 09:04:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.171 09:04:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.171 09:04:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.171 09:04:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.171 09:04:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.171 09:04:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:28.171 09:04:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:28.171 09:04:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:28.171 09:04:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:28.171 09:04:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:28.171 09:04:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:28.171 09:04:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:28.171 09:04:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:28.171 Found 0000:af:00.0 (0x8086 - 
0x159b) 00:30:28.171 09:04:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:28.171 09:04:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:28.171 09:04:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.171 09:04:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.171 09:04:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:28.171 09:04:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:28.171 09:04:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:28.171 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:28.171 09:04:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:28.171 09:04:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:28.171 09:04:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.172 09:04:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.172 09:04:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:28.172 09:04:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:28.172 09:04:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:28.172 09:04:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:28.172 09:04:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:28.172 09:04:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.172 09:04:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:28.172 09:04:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.172 09:04:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:28.172 Found net devices under 0000:af:00.0: cvl_0_0 00:30:28.172 09:04:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.172 09:04:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:28.172 09:04:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.172 09:04:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:28.172 09:04:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.172 09:04:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:28.172 Found net devices under 0000:af:00.1: cvl_0_1 00:30:28.172 09:04:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.172 09:04:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:28.172 09:04:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:28.172 09:04:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:28.172 09:04:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:28.172 09:04:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:28.172 09:04:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.172 09:04:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.172 09:04:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.172 09:04:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:28.172 09:04:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.172 09:04:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.172 09:04:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:28.172 09:04:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.172 09:04:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.172 09:04:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:28.172 09:04:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:28.172 09:04:43 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:30:28.172 09:04:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.172 09:04:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.172 09:04:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.172 09:04:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:28.172 09:04:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.172 09:04:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.172 09:04:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.172 09:04:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:28.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:30:28.172 00:30:28.172 --- 10.0.0.2 ping statistics --- 00:30:28.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.172 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:30:28.172 09:04:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:28.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:30:28.172 00:30:28.172 --- 10.0.0.1 ping statistics --- 00:30:28.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.172 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:30:28.172 09:04:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.172 09:04:44 -- nvmf/common.sh@411 -- # return 0 00:30:28.172 09:04:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:28.172 09:04:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.172 09:04:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:28.172 09:04:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:28.172 09:04:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.172 09:04:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:28.172 09:04:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:28.172 09:04:44 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:28.172 09:04:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:28.172 09:04:44 -- common/autotest_common.sh@10 -- # set +x 00:30:28.172 09:04:44 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:28.172 09:04:44 -- common/autotest_common.sh@1510 -- # bdfs=() 00:30:28.172 09:04:44 -- common/autotest_common.sh@1510 -- # local bdfs 00:30:28.172 09:04:44 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:30:28.172 09:04:44 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:30:28.172 09:04:44 -- common/autotest_common.sh@1499 -- # bdfs=() 00:30:28.172 09:04:44 -- common/autotest_common.sh@1499 -- # local bdfs 00:30:28.172 09:04:44 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:28.172 09:04:44 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:28.172 09:04:44 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:30:28.172 09:04:44 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:30:28.172 09:04:44 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:d8:00.0 00:30:28.172 09:04:44 -- common/autotest_common.sh@1513 -- # echo 0000:d8:00.0 00:30:28.172 09:04:44 -- 
target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:30:28.172 09:04:44 -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:30:28.172 09:04:44 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:30:28.172 09:04:44 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:28.172 09:04:44 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:28.172 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.372 09:04:49 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:30:32.372 09:04:49 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:30:32.372 09:04:49 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:32.372 09:04:49 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:32.372 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.643 09:04:53 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:37.643 09:04:53 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:37.643 09:04:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:37.643 09:04:53 -- common/autotest_common.sh@10 -- # set +x 00:30:37.643 09:04:54 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:37.643 09:04:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:37.643 09:04:54 -- common/autotest_common.sh@10 -- # set +x 00:30:37.643 09:04:54 -- target/identify_passthru.sh@31 -- # nvmfpid=2247677 00:30:37.643 09:04:54 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:37.643 09:04:54 -- target/identify_passthru.sh@35 -- # waitforlisten 2247677 00:30:37.643 09:04:54 -- common/autotest_common.sh@817 -- # '[' -z 2247677 ']' 00:30:37.643 09:04:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.643 09:04:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:37.643 09:04:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.643 09:04:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:37.643 09:04:54 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:37.643 09:04:54 -- common/autotest_common.sh@10 -- # set +x 00:30:37.643 [2024-04-26 09:04:54.081346] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:30:37.643 [2024-04-26 09:04:54.081398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.643 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.643 [2024-04-26 09:04:54.156912] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:37.643 [2024-04-26 09:04:54.228757] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:37.643 [2024-04-26 09:04:54.228795] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
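(The serial number just captured from the PCIe controller, BTLN916500W71P6AGN, is what the passthru test compares later in the run against the value reported over NVMe/TCP. A minimal sketch of that check, reusing the grep/awk pipeline visible in the log; $bdf and the variable names here are illustrative, not the script's own:)
  # Identify the controller directly over PCIe, then again through the
  # NVMe-oF TCP target that passes identify through to the same device.
  pcie_serial=$(spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  tcp_serial=$(spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')
  # The test passes only if passthrough preserved the identity.
  [ "$pcie_serial" = "$tcp_serial" ] || echo "identify passthru mismatch" >&2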
00:30:37.643 [2024-04-26 09:04:54.228804] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:37.643 [2024-04-26 09:04:54.228812] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:37.643 [2024-04-26 09:04:54.228819] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:37.643 [2024-04-26 09:04:54.228907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.643 [2024-04-26 09:04:54.229003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:37.643 [2024-04-26 09:04:54.229086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:37.643 [2024-04-26 09:04:54.229088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.643 09:04:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:37.643 09:04:54 -- common/autotest_common.sh@850 -- # return 0 00:30:37.643 09:04:54 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:37.643 09:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:37.643 09:04:54 -- common/autotest_common.sh@10 -- # set +x 00:30:37.643 INFO: Log level set to 20 00:30:37.643 INFO: Requests: 00:30:37.643 { 00:30:37.643 "jsonrpc": "2.0", 00:30:37.643 "method": "nvmf_set_config", 00:30:37.643 "id": 1, 00:30:37.643 "params": { 00:30:37.643 "admin_cmd_passthru": { 00:30:37.643 "identify_ctrlr": true 00:30:37.643 } 00:30:37.643 } 00:30:37.643 } 00:30:37.643 00:30:37.903 INFO: response: 00:30:37.903 { 00:30:37.903 "jsonrpc": "2.0", 00:30:37.903 "id": 1, 00:30:37.903 "result": true 00:30:37.903 } 00:30:37.903 00:30:37.903 09:04:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:37.903 09:04:54 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:37.903 09:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:37.903 09:04:54 -- common/autotest_common.sh@10 -- # set +x 00:30:37.903 INFO: Setting log level to 20 00:30:37.903 INFO: Setting log level to 20 00:30:37.903 INFO: Log level set to 20 00:30:37.903 INFO: Log level set to 20 00:30:37.903 INFO: Requests: 00:30:37.903 { 00:30:37.903 "jsonrpc": "2.0", 00:30:37.903 "method": "framework_start_init", 00:30:37.903 "id": 1 00:30:37.903 } 00:30:37.903 00:30:37.903 INFO: Requests: 00:30:37.903 { 00:30:37.903 "jsonrpc": "2.0", 00:30:37.903 "method": "framework_start_init", 00:30:37.903 "id": 1 00:30:37.903 } 00:30:37.903 00:30:37.903 [2024-04-26 09:04:54.981971] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:37.903 INFO: response: 00:30:37.903 { 00:30:37.903 "jsonrpc": "2.0", 00:30:37.903 "id": 1, 00:30:37.903 "result": true 00:30:37.903 } 00:30:37.903 00:30:37.903 INFO: response: 00:30:37.903 { 00:30:37.903 "jsonrpc": "2.0", 00:30:37.903 "id": 1, 00:30:37.903 "result": true 00:30:37.903 } 00:30:37.903 00:30:37.903 09:04:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:37.903 09:04:54 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:37.903 09:04:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:37.903 09:04:54 -- common/autotest_common.sh@10 -- # set +x 00:30:37.903 INFO: Setting log level to 40 00:30:37.903 INFO: Setting log level to 40 00:30:37.903 INFO: Setting log level to 40 00:30:37.903 [2024-04-26 09:04:54.995405] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.903 09:04:55 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:37.903 09:04:55 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:37.903 09:04:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:37.903 09:04:55 -- common/autotest_common.sh@10 -- # set +x 00:30:37.903 09:04:55 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:30:37.903 09:04:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:37.903 09:04:55 -- common/autotest_common.sh@10 -- # set +x 00:30:41.189 Nvme0n1 00:30:41.189 09:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.189 09:04:57 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:41.189 09:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.189 09:04:57 -- common/autotest_common.sh@10 -- # set +x 00:30:41.189 09:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.189 09:04:57 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:41.189 09:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.189 09:04:57 -- common/autotest_common.sh@10 -- # set +x 00:30:41.189 09:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.189 09:04:57 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:41.189 09:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.189 09:04:57 -- common/autotest_common.sh@10 -- # set +x 00:30:41.189 [2024-04-26 09:04:57.917147] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.189 09:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.189 09:04:57 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:41.189 09:04:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.189 09:04:57 -- common/autotest_common.sh@10 -- # set +x 00:30:41.189 [2024-04-26 09:04:57.924916] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:41.189 [ 00:30:41.189 { 00:30:41.189 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:41.189 "subtype": "Discovery", 00:30:41.189 "listen_addresses": [], 00:30:41.189 "allow_any_host": true, 00:30:41.189 "hosts": [] 00:30:41.189 }, 00:30:41.189 { 00:30:41.189 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:41.189 "subtype": "NVMe", 00:30:41.189 "listen_addresses": [ 00:30:41.189 { 00:30:41.189 "transport": "TCP", 00:30:41.189 "trtype": "TCP", 00:30:41.189 "adrfam": "IPv4", 00:30:41.189 "traddr": "10.0.0.2", 00:30:41.189 "trsvcid": "4420" 00:30:41.189 } 00:30:41.189 ], 00:30:41.189 "allow_any_host": true, 00:30:41.189 "hosts": [], 00:30:41.189 "serial_number": "SPDK00000000000001", 00:30:41.189 "model_number": "SPDK bdev Controller", 00:30:41.189 "max_namespaces": 1, 00:30:41.189 "min_cntlid": 1, 00:30:41.189 "max_cntlid": 65519, 00:30:41.189 "namespaces": [ 00:30:41.189 { 00:30:41.189 "nsid": 1, 00:30:41.189 "bdev_name": "Nvme0n1", 00:30:41.189 "name": "Nvme0n1", 00:30:41.189 "nguid": "0E59B4F7698742A8B6B3B0E888C3AE7A", 00:30:41.189 "uuid": "0e59b4f7-6987-42a8-b6b3-b0e888c3ae7a" 00:30:41.189 } 00:30:41.189 ] 00:30:41.189 } 00:30:41.189 ] 00:30:41.189 09:04:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.189 09:04:57 -- 
target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:41.189 09:04:57 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:41.189 09:04:57 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:41.189 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.189 09:04:58 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:30:41.189 09:04:58 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:41.189 09:04:58 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:41.189 09:04:58 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:41.189 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.189 09:04:58 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:41.189 09:04:58 -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:30:41.189 09:04:58 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:41.189 09:04:58 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:41.189 09:04:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.189 09:04:58 -- common/autotest_common.sh@10 -- # set +x 00:30:41.189 09:04:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.189 09:04:58 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:41.189 09:04:58 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:41.189 09:04:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:41.189 09:04:58 -- nvmf/common.sh@117 -- # sync 00:30:41.189 09:04:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:41.189 09:04:58 -- nvmf/common.sh@120 -- # set +e 00:30:41.189 09:04:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:41.189 09:04:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:41.189 rmmod nvme_tcp 00:30:41.189 rmmod nvme_fabrics 00:30:41.189 rmmod nvme_keyring 00:30:41.189 09:04:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:41.189 09:04:58 -- nvmf/common.sh@124 -- # set -e 00:30:41.189 09:04:58 -- nvmf/common.sh@125 -- # return 0 00:30:41.189 09:04:58 -- nvmf/common.sh@478 -- # '[' -n 2247677 ']' 00:30:41.189 09:04:58 -- nvmf/common.sh@479 -- # killprocess 2247677 00:30:41.189 09:04:58 -- common/autotest_common.sh@936 -- # '[' -z 2247677 ']' 00:30:41.189 09:04:58 -- common/autotest_common.sh@940 -- # kill -0 2247677 00:30:41.189 09:04:58 -- common/autotest_common.sh@941 -- # uname 00:30:41.189 09:04:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:41.189 09:04:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2247677 00:30:41.189 09:04:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:41.189 09:04:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:41.189 09:04:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2247677' 00:30:41.189 killing process with pid 2247677 00:30:41.189 09:04:58 -- common/autotest_common.sh@955 -- # kill 2247677 00:30:41.189 [2024-04-26 09:04:58.387091] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 
hit 1 times 00:30:41.189 09:04:58 -- common/autotest_common.sh@960 -- # wait 2247677 00:30:43.724 09:05:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:43.724 09:05:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:43.724 09:05:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:43.724 09:05:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:43.724 09:05:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:43.724 09:05:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.724 09:05:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:43.724 09:05:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.663 09:05:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:45.663 00:30:45.663 real 0m25.515s 00:30:45.663 user 0m33.506s 00:30:45.663 sys 0m6.861s 00:30:45.663 09:05:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:45.663 09:05:02 -- common/autotest_common.sh@10 -- # set +x 00:30:45.663 ************************************ 00:30:45.663 END TEST nvmf_identify_passthru 00:30:45.663 ************************************ 00:30:45.663 09:05:02 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:45.663 09:05:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:45.664 09:05:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:45.664 09:05:02 -- common/autotest_common.sh@10 -- # set +x 00:30:45.664 ************************************ 00:30:45.664 START TEST nvmf_dif 00:30:45.664 ************************************ 00:30:45.664 09:05:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:45.664 * Looking for test storage... 
00:30:45.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:45.664 09:05:02 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:45.664 09:05:02 -- nvmf/common.sh@7 -- # uname -s 00:30:45.664 09:05:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.664 09:05:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.664 09:05:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.664 09:05:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.664 09:05:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.664 09:05:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.664 09:05:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.664 09:05:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.664 09:05:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.664 09:05:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:45.664 09:05:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:45.664 09:05:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:45.664 09:05:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.664 09:05:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.664 09:05:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:45.664 09:05:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.664 09:05:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:45.664 09:05:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.664 09:05:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.664 09:05:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.664 09:05:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.664 09:05:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.664 09:05:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.664 09:05:02 -- paths/export.sh@5 -- # export PATH 00:30:45.664 09:05:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.664 09:05:02 -- nvmf/common.sh@47 -- # : 0 00:30:45.664 09:05:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:45.664 09:05:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:45.664 09:05:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.664 09:05:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.664 09:05:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.664 09:05:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:45.664 09:05:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:45.664 09:05:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:45.664 09:05:02 -- target/dif.sh@15 -- # NULL_META=16 00:30:45.664 09:05:02 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:45.664 09:05:02 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:45.664 09:05:02 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:45.664 09:05:02 -- target/dif.sh@135 -- # nvmftestinit 00:30:45.664 09:05:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:45.664 09:05:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:45.664 09:05:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:45.664 09:05:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:45.664 09:05:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:45.664 09:05:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.664 09:05:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:45.664 09:05:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.664 09:05:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:30:45.664 09:05:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:30:45.664 09:05:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:30:45.664 09:05:02 -- common/autotest_common.sh@10 -- # set +x 00:30:52.230 09:05:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:52.230 09:05:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:30:52.230 09:05:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:52.230 09:05:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:52.230 09:05:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:52.230 09:05:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:52.230 09:05:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:52.230 09:05:09 -- nvmf/common.sh@295 -- # net_devs=() 00:30:52.230 09:05:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:52.230 09:05:09 -- nvmf/common.sh@296 -- # e810=() 00:30:52.230 09:05:09 -- nvmf/common.sh@296 -- # local -ga e810 00:30:52.230 09:05:09 -- nvmf/common.sh@297 -- # x722=() 00:30:52.230 09:05:09 -- nvmf/common.sh@297 -- # local -ga x722 00:30:52.230 09:05:09 -- nvmf/common.sh@298 -- # mlx=() 00:30:52.230 09:05:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:30:52.230 09:05:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:52.230 09:05:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:52.230 09:05:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:52.230 09:05:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
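(The array setup above is gather_supported_nvmf_pci_devs bucketing NICs by PCI vendor:device ID before the transport-specific selection. A condensed sketch of that bucketing under the assumption that pci_bus_cache maps "vendor:device" keys to BDF lists; the cache population shown here via lspci is illustrative, not SPDK's actual implementation:)
  # Populate the cache from lspci -Dn (lines like "0000:af:00.0 0200: 8086:159b").
  declare -A pci_bus_cache
  while read -r bdf _ id _; do
    pci_bus_cache["0x${id%:*}:0x${id#*:}"]+="$bdf "
  done < <(lspci -Dn)
  intel=0x8086
  # Collect Intel E810 devices by their two device IDs, as in the log.
  e810=()
  e810+=(${pci_bus_cache["$intel:0x1592"]})
  e810+=(${pci_bus_cache["$intel:0x159b"]})
  # SPDK_TEST_NVMF_NICS=e810, so only that bucket is kept.
  pci_devs=("${e810[@]}")
  echo "e810 NICs: ${pci_devs[*]}"   # in this run: the two 0x159b ports, 0000:af:00.0 and 0000:af:00.1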
00:30:52.230 09:05:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:52.230 09:05:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:52.230 09:05:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:52.230 09:05:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:52.230 09:05:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:52.230 09:05:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:52.230 09:05:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:52.230 09:05:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:52.230 09:05:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:52.230 09:05:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:52.230 09:05:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:52.230 09:05:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:52.230 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:52.230 09:05:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:52.230 09:05:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:52.230 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:52.230 09:05:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:52.230 09:05:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:52.230 09:05:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.230 09:05:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:52.230 09:05:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.230 09:05:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:52.230 Found net devices under 0000:af:00.0: cvl_0_0 00:30:52.230 09:05:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.230 09:05:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:52.230 09:05:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.230 09:05:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:30:52.230 09:05:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.230 09:05:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:52.230 Found net devices under 0000:af:00.1: cvl_0_1 00:30:52.230 09:05:09 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:30:52.230 09:05:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:30:52.230 09:05:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:30:52.230 09:05:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:30:52.230 09:05:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:30:52.230 09:05:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:52.230 09:05:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:52.230 09:05:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:52.230 09:05:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:52.230 09:05:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:52.230 09:05:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:52.230 09:05:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:52.230 09:05:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:52.230 09:05:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:52.230 09:05:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:52.230 09:05:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:52.230 09:05:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:52.230 09:05:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:52.489 09:05:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:52.489 09:05:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:52.489 09:05:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:52.489 09:05:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:52.489 09:05:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:52.489 09:05:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:52.489 09:05:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:52.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:52.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:30:52.489 00:30:52.489 --- 10.0.0.2 ping statistics --- 00:30:52.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.489 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:30:52.489 09:05:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:52.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:52.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:30:52.489 00:30:52.489 --- 10.0.0.1 ping statistics --- 00:30:52.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.489 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:30:52.489 09:05:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:52.489 09:05:09 -- nvmf/common.sh@411 -- # return 0 00:30:52.489 09:05:09 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:30:52.489 09:05:09 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:55.766 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:55.766 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:55.766 09:05:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:55.766 09:05:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:55.766 09:05:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:55.766 09:05:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:55.766 09:05:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:55.766 09:05:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:55.766 09:05:12 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:55.766 09:05:12 -- target/dif.sh@137 -- # nvmfappstart 00:30:55.767 09:05:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:55.767 09:05:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:55.767 09:05:12 -- common/autotest_common.sh@10 -- # set +x 00:30:55.767 09:05:12 -- nvmf/common.sh@470 -- # nvmfpid=2253548 00:30:55.767 09:05:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:55.767 09:05:12 -- nvmf/common.sh@471 -- # waitforlisten 2253548 00:30:55.767 09:05:12 -- common/autotest_common.sh@817 -- # '[' -z 2253548 ']' 00:30:55.767 09:05:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.767 09:05:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:55.767 09:05:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
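(Each target suite rebuilds the same two-port loopback topology before launching nvmf_tgt. A condensed recap of the nvmf_tcp_init commands logged above, with interface and namespace names exactly as in the log; run as root:)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let the initiator reach the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # root ns -> namespaced target, as verified above
(This is also why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above: the target binary itself runs under "ip netns exec cvl_0_0_ns_spdk", while fio and the nvme initiator stay in the root namespace.)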
00:30:55.767 09:05:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:55.767 09:05:12 -- common/autotest_common.sh@10 -- # set +x 00:30:55.767 [2024-04-26 09:05:12.767148] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:30:55.767 [2024-04-26 09:05:12.767204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:55.767 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.767 [2024-04-26 09:05:12.843291] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.767 [2024-04-26 09:05:12.915162] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:55.767 [2024-04-26 09:05:12.915198] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:55.767 [2024-04-26 09:05:12.915207] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:55.767 [2024-04-26 09:05:12.915216] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:55.767 [2024-04-26 09:05:12.915224] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:55.767 [2024-04-26 09:05:12.915249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.339 09:05:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:56.339 09:05:13 -- common/autotest_common.sh@850 -- # return 0 00:30:56.340 09:05:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:56.340 09:05:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:56.340 09:05:13 -- common/autotest_common.sh@10 -- # set +x 00:30:56.601 09:05:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:56.601 09:05:13 -- target/dif.sh@139 -- # create_transport 00:30:56.601 09:05:13 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:56.601 09:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.601 09:05:13 -- common/autotest_common.sh@10 -- # set +x 00:30:56.601 [2024-04-26 09:05:13.606270] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.601 09:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.601 09:05:13 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:56.601 09:05:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:56.601 09:05:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:56.601 09:05:13 -- common/autotest_common.sh@10 -- # set +x 00:30:56.601 ************************************ 00:30:56.601 START TEST fio_dif_1_default 00:30:56.601 ************************************ 00:30:56.601 09:05:13 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:30:56.601 09:05:13 -- target/dif.sh@86 -- # create_subsystems 0 00:30:56.601 09:05:13 -- target/dif.sh@28 -- # local sub 00:30:56.601 09:05:13 -- target/dif.sh@30 -- # for sub in "$@" 00:30:56.601 09:05:13 -- target/dif.sh@31 -- # create_subsystem 0 00:30:56.601 09:05:13 -- target/dif.sh@18 -- # local sub_id=0 00:30:56.601 09:05:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:56.601 09:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.601 09:05:13 -- common/autotest_common.sh@10 -- # set +x 00:30:56.601 
bdev_null0 00:30:56.601 09:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.601 09:05:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:56.601 09:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.601 09:05:13 -- common/autotest_common.sh@10 -- # set +x 00:30:56.601 09:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.601 09:05:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:56.601 09:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.601 09:05:13 -- common/autotest_common.sh@10 -- # set +x 00:30:56.601 09:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.601 09:05:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:56.601 09:05:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:56.601 09:05:13 -- common/autotest_common.sh@10 -- # set +x 00:30:56.601 [2024-04-26 09:05:13.794891] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.601 09:05:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:56.601 09:05:13 -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:56.601 09:05:13 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:56.601 09:05:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:56.601 09:05:13 -- nvmf/common.sh@521 -- # config=() 00:30:56.601 09:05:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:56.601 09:05:13 -- nvmf/common.sh@521 -- # local subsystem config 00:30:56.601 09:05:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:56.601 09:05:13 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:56.601 09:05:13 -- target/dif.sh@82 -- # gen_fio_conf 00:30:56.601 09:05:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:56.601 { 00:30:56.601 "params": { 00:30:56.601 "name": "Nvme$subsystem", 00:30:56.601 "trtype": "$TEST_TRANSPORT", 00:30:56.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.601 "adrfam": "ipv4", 00:30:56.601 "trsvcid": "$NVMF_PORT", 00:30:56.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.601 "hdgst": ${hdgst:-false}, 00:30:56.601 "ddgst": ${ddgst:-false} 00:30:56.601 }, 00:30:56.601 "method": "bdev_nvme_attach_controller" 00:30:56.601 } 00:30:56.601 EOF 00:30:56.601 )") 00:30:56.601 09:05:13 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:56.601 09:05:13 -- target/dif.sh@54 -- # local file 00:30:56.601 09:05:13 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:56.602 09:05:13 -- target/dif.sh@56 -- # cat 00:30:56.602 09:05:13 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:56.602 09:05:13 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:56.602 09:05:13 -- common/autotest_common.sh@1327 -- # shift 00:30:56.602 09:05:13 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:56.602 09:05:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:56.602 09:05:13 -- nvmf/common.sh@543 -- # cat 00:30:56.602 09:05:13 -- common/autotest_common.sh@1331 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:56.602 09:05:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:56.602 09:05:13 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:56.602 09:05:13 -- target/dif.sh@72 -- # (( file <= files )) 00:30:56.602 09:05:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:56.602 09:05:13 -- nvmf/common.sh@545 -- # jq . 00:30:56.602 09:05:13 -- nvmf/common.sh@546 -- # IFS=, 00:30:56.602 09:05:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:56.602 "params": { 00:30:56.602 "name": "Nvme0", 00:30:56.602 "trtype": "tcp", 00:30:56.602 "traddr": "10.0.0.2", 00:30:56.602 "adrfam": "ipv4", 00:30:56.602 "trsvcid": "4420", 00:30:56.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:56.602 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:56.602 "hdgst": false, 00:30:56.602 "ddgst": false 00:30:56.602 }, 00:30:56.602 "method": "bdev_nvme_attach_controller" 00:30:56.602 }' 00:30:56.602 09:05:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:56.602 09:05:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:56.602 09:05:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:56.602 09:05:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:56.602 09:05:13 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:30:56.602 09:05:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:56.875 09:05:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:56.875 09:05:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:56.875 09:05:13 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:56.875 09:05:13 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:57.134 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:57.134 fio-3.35 00:30:57.134 Starting 1 thread 00:30:57.134 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.359 00:31:09.359 filename0: (groupid=0, jobs=1): err= 0: pid=2254155: Fri Apr 26 09:05:24 2024 00:31:09.359 read: IOPS=95, BW=380KiB/s (389kB/s)(3808KiB/10021msec) 00:31:09.359 slat (nsec): min=5491, max=38519, avg=5935.62, stdev=1524.60 00:31:09.359 clat (usec): min=41879, max=46200, avg=42085.18, stdev=385.65 00:31:09.359 lat (usec): min=41884, max=46226, avg=42091.12, stdev=386.03 00:31:09.359 clat percentiles (usec): 00:31:09.359 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:31:09.359 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:09.359 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:31:09.359 | 99.00th=[43254], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:31:09.359 | 99.99th=[46400] 00:31:09.359 bw ( KiB/s): min= 352, max= 384, per=99.74%, avg=379.20, stdev=11.72, samples=20 00:31:09.359 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:31:09.359 lat (msec) : 50=100.00% 00:31:09.359 cpu : usr=86.14%, sys=13.58%, ctx=14, majf=0, minf=214 00:31:09.359 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:09.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.360 issued rwts: total=952,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:31:09.360 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:09.360 00:31:09.360 Run status group 0 (all jobs): 00:31:09.360 READ: bw=380KiB/s (389kB/s), 380KiB/s-380KiB/s (389kB/s-389kB/s), io=3808KiB (3899kB), run=10021-10021msec 00:31:09.360 09:05:25 -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:09.360 09:05:25 -- target/dif.sh@43 -- # local sub 00:31:09.360 09:05:25 -- target/dif.sh@45 -- # for sub in "$@" 00:31:09.360 09:05:25 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:09.360 09:05:25 -- target/dif.sh@36 -- # local sub_id=0 00:31:09.360 09:05:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:09.360 09:05:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.360 09:05:25 -- common/autotest_common.sh@10 -- # set +x 00:31:09.360 09:05:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.360 09:05:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:09.360 09:05:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.360 09:05:25 -- common/autotest_common.sh@10 -- # set +x 00:31:09.360 09:05:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.360 00:31:09.360 real 0m11.280s 00:31:09.360 user 0m17.522s 00:31:09.360 sys 0m1.783s 00:31:09.360 09:05:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:09.360 09:05:25 -- common/autotest_common.sh@10 -- # set +x 00:31:09.360 ************************************ 00:31:09.360 END TEST fio_dif_1_default 00:31:09.360 ************************************ 00:31:09.360 09:05:25 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:09.360 09:05:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:09.360 09:05:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:09.360 09:05:25 -- common/autotest_common.sh@10 -- # set +x 00:31:09.360 ************************************ 00:31:09.360 START TEST fio_dif_1_multi_subsystems 00:31:09.360 ************************************ 00:31:09.360 09:05:25 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:31:09.360 09:05:25 -- target/dif.sh@92 -- # local files=1 00:31:09.360 09:05:25 -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:09.360 09:05:25 -- target/dif.sh@28 -- # local sub 00:31:09.360 09:05:25 -- target/dif.sh@30 -- # for sub in "$@" 00:31:09.360 09:05:25 -- target/dif.sh@31 -- # create_subsystem 0 00:31:09.360 09:05:25 -- target/dif.sh@18 -- # local sub_id=0 00:31:09.360 09:05:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:09.360 09:05:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.360 09:05:25 -- common/autotest_common.sh@10 -- # set +x 00:31:09.360 bdev_null0 00:31:09.360 09:05:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.360 09:05:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:09.360 09:05:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.360 09:05:25 -- common/autotest_common.sh@10 -- # set +x 00:31:09.360 09:05:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.360 09:05:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:09.360 09:05:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.360 09:05:25 -- common/autotest_common.sh@10 -- # set +x 00:31:09.360 09:05:25 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:31:09.360 09:05:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:09.360 09:05:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.360 09:05:25 -- common/autotest_common.sh@10 -- # set +x 00:31:09.360 [2024-04-26 09:05:25.276785] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:09.360 09:05:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.360 09:05:25 -- target/dif.sh@30 -- # for sub in "$@" 00:31:09.360 09:05:25 -- target/dif.sh@31 -- # create_subsystem 1 00:31:09.360 09:05:25 -- target/dif.sh@18 -- # local sub_id=1 00:31:09.360 09:05:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:09.360 09:05:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.360 09:05:25 -- common/autotest_common.sh@10 -- # set +x 00:31:09.360 bdev_null1 00:31:09.360 09:05:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.360 09:05:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:09.360 09:05:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.360 09:05:25 -- common/autotest_common.sh@10 -- # set +x 00:31:09.360 09:05:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.360 09:05:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:09.360 09:05:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.360 09:05:25 -- common/autotest_common.sh@10 -- # set +x 00:31:09.360 09:05:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.360 09:05:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:09.360 09:05:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:09.360 09:05:25 -- common/autotest_common.sh@10 -- # set +x 00:31:09.360 09:05:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:09.360 09:05:25 -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:09.360 09:05:25 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:09.360 09:05:25 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:09.360 09:05:25 -- nvmf/common.sh@521 -- # config=() 00:31:09.360 09:05:25 -- nvmf/common.sh@521 -- # local subsystem config 00:31:09.360 09:05:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:09.360 09:05:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.360 09:05:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:09.360 { 00:31:09.360 "params": { 00:31:09.360 "name": "Nvme$subsystem", 00:31:09.360 "trtype": "$TEST_TRANSPORT", 00:31:09.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:09.360 "adrfam": "ipv4", 00:31:09.360 "trsvcid": "$NVMF_PORT", 00:31:09.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:09.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:09.360 "hdgst": ${hdgst:-false}, 00:31:09.360 "ddgst": ${ddgst:-false} 00:31:09.360 }, 00:31:09.360 "method": "bdev_nvme_attach_controller" 00:31:09.360 } 00:31:09.360 EOF 00:31:09.360 )") 00:31:09.360 09:05:25 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.360 09:05:25 -- target/dif.sh@82 -- # gen_fio_conf 00:31:09.360 09:05:25 -- 
common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:09.360 09:05:25 -- target/dif.sh@54 -- # local file 00:31:09.360 09:05:25 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:09.360 09:05:25 -- target/dif.sh@56 -- # cat 00:31:09.360 09:05:25 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:09.360 09:05:25 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.360 09:05:25 -- common/autotest_common.sh@1327 -- # shift 00:31:09.360 09:05:25 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:09.360 09:05:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.360 09:05:25 -- nvmf/common.sh@543 -- # cat 00:31:09.360 09:05:25 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.360 09:05:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:09.360 09:05:25 -- target/dif.sh@72 -- # (( file <= files )) 00:31:09.360 09:05:25 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:09.360 09:05:25 -- target/dif.sh@73 -- # cat 00:31:09.360 09:05:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:09.360 09:05:25 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:09.360 09:05:25 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:09.360 { 00:31:09.360 "params": { 00:31:09.360 "name": "Nvme$subsystem", 00:31:09.360 "trtype": "$TEST_TRANSPORT", 00:31:09.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:09.360 "adrfam": "ipv4", 00:31:09.360 "trsvcid": "$NVMF_PORT", 00:31:09.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:09.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:09.360 "hdgst": ${hdgst:-false}, 00:31:09.360 "ddgst": ${ddgst:-false} 00:31:09.360 }, 00:31:09.360 "method": "bdev_nvme_attach_controller" 00:31:09.360 } 00:31:09.360 EOF 00:31:09.360 )") 00:31:09.360 09:05:25 -- target/dif.sh@72 -- # (( file++ )) 00:31:09.360 09:05:25 -- target/dif.sh@72 -- # (( file <= files )) 00:31:09.360 09:05:25 -- nvmf/common.sh@543 -- # cat 00:31:09.360 09:05:25 -- nvmf/common.sh@545 -- # jq . 
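The pair of config+=() heredocs and the jq pass traced above are the whole of gen_nvmf_target_json: one bdev_nvme_attach_controller params object is accumulated per subsystem id, the objects are comma-joined via IFS, and the result is handed to fio as --spdk_json_conf over a /dev/fd file. A standalone sketch of that pattern follows; the outer "subsystems" envelope is an assumption based on SPDK's JSON config layout and is not visible in this trace:

    gen_json_conf() {
        local config=() sub
        for sub in "$@"; do
            # one attach-controller object per subsystem, as in the trace above
            config+=("$(cat <<EOF
    { "params": { "name": "Nvme$sub", "trtype": "tcp", "traddr": "10.0.0.2",
      "adrfam": "ipv4", "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
      "hostnqn": "nqn.2016-06.io.spdk:host$sub",
      "hdgst": false, "ddgst": false },
      "method": "bdev_nvme_attach_controller" }
EOF
            )")
        done
        local IFS=,
        # comma-join the objects; jq pretty-prints and doubles as validation
        printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
    }

Called as gen_json_conf 0 1, this reproduces the two-controller document printed just below.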
00:31:09.360 09:05:25 -- nvmf/common.sh@546 -- # IFS=, 00:31:09.360 09:05:25 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:09.360 "params": { 00:31:09.360 "name": "Nvme0", 00:31:09.360 "trtype": "tcp", 00:31:09.360 "traddr": "10.0.0.2", 00:31:09.360 "adrfam": "ipv4", 00:31:09.360 "trsvcid": "4420", 00:31:09.360 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:09.360 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:09.360 "hdgst": false, 00:31:09.360 "ddgst": false 00:31:09.360 }, 00:31:09.360 "method": "bdev_nvme_attach_controller" 00:31:09.360 },{ 00:31:09.360 "params": { 00:31:09.360 "name": "Nvme1", 00:31:09.360 "trtype": "tcp", 00:31:09.360 "traddr": "10.0.0.2", 00:31:09.360 "adrfam": "ipv4", 00:31:09.360 "trsvcid": "4420", 00:31:09.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:09.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:09.360 "hdgst": false, 00:31:09.360 "ddgst": false 00:31:09.360 }, 00:31:09.360 "method": "bdev_nvme_attach_controller" 00:31:09.360 }' 00:31:09.360 09:05:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:09.360 09:05:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:09.360 09:05:25 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.360 09:05:25 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:09.360 09:05:25 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.360 09:05:25 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:09.360 09:05:25 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:09.361 09:05:25 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:09.361 09:05:25 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:09.361 09:05:25 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.361 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:09.361 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:09.361 fio-3.35 00:31:09.361 Starting 2 threads 00:31:09.361 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.325 00:31:19.325 filename0: (groupid=0, jobs=1): err= 0: pid=2256200: Fri Apr 26 09:05:36 2024 00:31:19.325 read: IOPS=94, BW=380KiB/s (389kB/s)(3808KiB/10023msec) 00:31:19.325 slat (nsec): min=5625, max=31413, avg=7402.28, stdev=2557.23 00:31:19.325 clat (usec): min=41846, max=44091, avg=42091.83, stdev=326.60 00:31:19.325 lat (usec): min=41852, max=44115, avg=42099.23, stdev=327.27 00:31:19.325 clat percentiles (usec): 00:31:19.325 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:19.325 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:19.325 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:31:19.325 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:31:19.325 | 99.99th=[44303] 00:31:19.325 bw ( KiB/s): min= 352, max= 384, per=49.89%, avg=379.20, stdev=11.72, samples=20 00:31:19.325 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:31:19.325 lat (msec) : 50=100.00% 00:31:19.325 cpu : usr=93.12%, sys=6.64%, ctx=9, majf=0, minf=146 00:31:19.325 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:31:19.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.325 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.325 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:19.325 filename1: (groupid=0, jobs=1): err= 0: pid=2256201: Fri Apr 26 09:05:36 2024 00:31:19.325 read: IOPS=94, BW=380KiB/s (389kB/s)(3808KiB/10025msec) 00:31:19.325 slat (nsec): min=5627, max=39706, avg=7383.31, stdev=2442.34 00:31:19.325 clat (usec): min=41768, max=44047, avg=42100.15, stdev=348.55 00:31:19.325 lat (usec): min=41774, max=44087, avg=42107.53, stdev=349.09 00:31:19.325 clat percentiles (usec): 00:31:19.325 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:19.325 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:19.325 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:31:19.325 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:31:19.325 | 99.99th=[44303] 00:31:19.325 bw ( KiB/s): min= 352, max= 384, per=49.89%, avg=379.20, stdev=11.72, samples=20 00:31:19.325 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:31:19.326 lat (msec) : 50=100.00% 00:31:19.326 cpu : usr=93.60%, sys=6.16%, ctx=14, majf=0, minf=34 00:31:19.326 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:19.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:19.326 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:19.326 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:19.326 00:31:19.326 Run status group 0 (all jobs): 00:31:19.326 READ: bw=760KiB/s (778kB/s), 380KiB/s-380KiB/s (389kB/s-389kB/s), io=7616KiB (7799kB), run=10023-10025msec 00:31:19.583 09:05:36 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:19.583 09:05:36 -- target/dif.sh@43 -- # local sub 00:31:19.583 09:05:36 -- target/dif.sh@45 -- # for sub in "$@" 00:31:19.583 09:05:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:19.583 09:05:36 -- target/dif.sh@36 -- # local sub_id=0 00:31:19.583 09:05:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:19.583 09:05:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.583 09:05:36 -- common/autotest_common.sh@10 -- # set +x 00:31:19.583 09:05:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.583 09:05:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:19.583 09:05:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.583 09:05:36 -- common/autotest_common.sh@10 -- # set +x 00:31:19.583 09:05:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.583 09:05:36 -- target/dif.sh@45 -- # for sub in "$@" 00:31:19.583 09:05:36 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:19.583 09:05:36 -- target/dif.sh@36 -- # local sub_id=1 00:31:19.583 09:05:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:19.583 09:05:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.583 09:05:36 -- common/autotest_common.sh@10 -- # set +x 00:31:19.583 09:05:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.583 09:05:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:19.583 09:05:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.583 09:05:36 -- common/autotest_common.sh@10 -- # set +x 00:31:19.583 
09:05:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.583 00:31:19.583 real 0m11.447s 00:31:19.583 user 0m27.917s 00:31:19.583 sys 0m1.639s 00:31:19.583 09:05:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:19.583 09:05:36 -- common/autotest_common.sh@10 -- # set +x 00:31:19.583 ************************************ 00:31:19.583 END TEST fio_dif_1_multi_subsystems 00:31:19.583 ************************************ 00:31:19.583 09:05:36 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:19.583 09:05:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:19.583 09:05:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:19.583 09:05:36 -- common/autotest_common.sh@10 -- # set +x 00:31:19.841 ************************************ 00:31:19.841 START TEST fio_dif_rand_params 00:31:19.841 ************************************ 00:31:19.841 09:05:36 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:31:19.841 09:05:36 -- target/dif.sh@100 -- # local NULL_DIF 00:31:19.841 09:05:36 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:19.841 09:05:36 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:19.841 09:05:36 -- target/dif.sh@103 -- # bs=128k 00:31:19.841 09:05:36 -- target/dif.sh@103 -- # numjobs=3 00:31:19.841 09:05:36 -- target/dif.sh@103 -- # iodepth=3 00:31:19.841 09:05:36 -- target/dif.sh@103 -- # runtime=5 00:31:19.841 09:05:36 -- target/dif.sh@105 -- # create_subsystems 0 00:31:19.841 09:05:36 -- target/dif.sh@28 -- # local sub 00:31:19.841 09:05:36 -- target/dif.sh@30 -- # for sub in "$@" 00:31:19.841 09:05:36 -- target/dif.sh@31 -- # create_subsystem 0 00:31:19.841 09:05:36 -- target/dif.sh@18 -- # local sub_id=0 00:31:19.841 09:05:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:19.841 09:05:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.841 09:05:36 -- common/autotest_common.sh@10 -- # set +x 00:31:19.841 bdev_null0 00:31:19.841 09:05:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.841 09:05:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:19.841 09:05:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.841 09:05:36 -- common/autotest_common.sh@10 -- # set +x 00:31:19.841 09:05:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.841 09:05:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:19.841 09:05:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.841 09:05:36 -- common/autotest_common.sh@10 -- # set +x 00:31:19.841 09:05:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.841 09:05:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:19.841 09:05:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.841 09:05:36 -- common/autotest_common.sh@10 -- # set +x 00:31:19.841 [2024-04-26 09:05:36.931611] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:19.841 09:05:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:19.841 09:05:36 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:19.841 09:05:36 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:19.841 09:05:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:19.841 09:05:36 -- nvmf/common.sh@521 -- # config=() 00:31:19.841 
09:05:36 -- nvmf/common.sh@521 -- # local subsystem config 00:31:19.841 09:05:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:19.841 09:05:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:19.841 09:05:36 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:19.841 09:05:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:19.841 { 00:31:19.841 "params": { 00:31:19.841 "name": "Nvme$subsystem", 00:31:19.841 "trtype": "$TEST_TRANSPORT", 00:31:19.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:19.841 "adrfam": "ipv4", 00:31:19.841 "trsvcid": "$NVMF_PORT", 00:31:19.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:19.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:19.841 "hdgst": ${hdgst:-false}, 00:31:19.841 "ddgst": ${ddgst:-false} 00:31:19.841 }, 00:31:19.841 "method": "bdev_nvme_attach_controller" 00:31:19.841 } 00:31:19.841 EOF 00:31:19.841 )") 00:31:19.841 09:05:36 -- target/dif.sh@82 -- # gen_fio_conf 00:31:19.841 09:05:36 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:19.841 09:05:36 -- target/dif.sh@54 -- # local file 00:31:19.841 09:05:36 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:19.841 09:05:36 -- target/dif.sh@56 -- # cat 00:31:19.841 09:05:36 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:19.841 09:05:36 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:19.841 09:05:36 -- common/autotest_common.sh@1327 -- # shift 00:31:19.841 09:05:36 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:19.841 09:05:36 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:19.841 09:05:36 -- nvmf/common.sh@543 -- # cat 00:31:19.841 09:05:36 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:19.841 09:05:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:19.841 09:05:36 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:19.841 09:05:36 -- target/dif.sh@72 -- # (( file <= files )) 00:31:19.841 09:05:36 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:19.841 09:05:36 -- nvmf/common.sh@545 -- # jq . 
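Everything stays on pipes: the fio_bdev helper seen above runs the plugin with both the generated JSON and the fio job text supplied as /dev/fd descriptors (62 and 61 in this trace), so no config file is written to disk. Condensed, and assuming the gen_json_conf sketch from earlier, the invocation is roughly:

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    # process substitutions surface as /dev/fd/NN paths inside fio
    LD_PRELOAD="$plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
        --spdk_json_conf <(gen_json_conf 0) <(gen_fio_conf)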
00:31:19.841 09:05:36 -- nvmf/common.sh@546 -- # IFS=, 00:31:19.841 09:05:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:19.841 "params": { 00:31:19.841 "name": "Nvme0", 00:31:19.841 "trtype": "tcp", 00:31:19.841 "traddr": "10.0.0.2", 00:31:19.841 "adrfam": "ipv4", 00:31:19.841 "trsvcid": "4420", 00:31:19.841 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:19.841 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:19.841 "hdgst": false, 00:31:19.841 "ddgst": false 00:31:19.841 }, 00:31:19.841 "method": "bdev_nvme_attach_controller" 00:31:19.841 }' 00:31:19.841 09:05:36 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:19.841 09:05:36 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:19.841 09:05:36 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:19.841 09:05:36 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:19.841 09:05:36 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:19.841 09:05:36 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:19.841 09:05:37 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:19.841 09:05:37 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:19.841 09:05:37 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:19.841 09:05:37 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:20.099 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:20.099 ... 00:31:20.099 fio-3.35 00:31:20.099 Starting 3 threads 00:31:20.099 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.657 00:31:26.657 filename0: (groupid=0, jobs=1): err= 0: pid=2258227: Fri Apr 26 09:05:42 2024 00:31:26.657 read: IOPS=178, BW=22.4MiB/s (23.5MB/s)(112MiB/5006msec) 00:31:26.657 slat (nsec): min=5703, max=25469, avg=8220.42, stdev=2474.81 00:31:26.657 clat (usec): min=4536, max=60445, avg=16747.15, stdev=16734.09 00:31:26.657 lat (usec): min=4543, max=60471, avg=16755.37, stdev=16734.31 00:31:26.657 clat percentiles (usec): 00:31:26.657 | 1.00th=[ 5604], 5.00th=[ 5866], 10.00th=[ 6128], 20.00th=[ 6849], 00:31:26.657 | 30.00th=[ 7701], 40.00th=[ 8586], 50.00th=[ 9503], 60.00th=[10945], 00:31:26.657 | 70.00th=[13304], 80.00th=[16057], 90.00th=[52691], 95.00th=[56361], 00:31:26.657 | 99.00th=[58983], 99.50th=[59507], 99.90th=[60556], 99.95th=[60556], 00:31:26.657 | 99.99th=[60556] 00:31:26.657 bw ( KiB/s): min=13824, max=33792, per=26.61%, avg=22860.80, stdev=6948.30, samples=10 00:31:26.657 iops : min= 108, max= 264, avg=178.60, stdev=54.28, samples=10 00:31:26.657 lat (msec) : 10=54.24%, 20=29.69%, 50=2.01%, 100=14.06% 00:31:26.657 cpu : usr=92.29%, sys=7.27%, ctx=8, majf=0, minf=70 00:31:26.657 IO depths : 1=3.8%, 2=96.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.657 issued rwts: total=896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.657 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:26.657 filename0: (groupid=0, jobs=1): err= 0: pid=2258228: Fri Apr 26 09:05:42 2024 00:31:26.657 read: IOPS=302, BW=37.9MiB/s (39.7MB/s)(190MiB/5006msec) 00:31:26.657 slat (nsec): min=5699, max=25558, avg=8030.67, stdev=2327.29 00:31:26.657 clat (usec): 
min=4945, max=55749, avg=9890.60, stdev=10129.95 00:31:26.657 lat (usec): min=4954, max=55760, avg=9898.63, stdev=10130.16 00:31:26.657 clat percentiles (usec): 00:31:26.657 | 1.00th=[ 5342], 5.00th=[ 5604], 10.00th=[ 5735], 20.00th=[ 5866], 00:31:26.657 | 30.00th=[ 6259], 40.00th=[ 6587], 50.00th=[ 6980], 60.00th=[ 7504], 00:31:26.657 | 70.00th=[ 8094], 80.00th=[ 8979], 90.00th=[11600], 95.00th=[47973], 00:31:26.657 | 99.00th=[52167], 99.50th=[53216], 99.90th=[55837], 99.95th=[55837], 00:31:26.657 | 99.99th=[55837] 00:31:26.657 bw ( KiB/s): min=26368, max=49664, per=45.08%, avg=38732.80, stdev=8353.13, samples=10 00:31:26.657 iops : min= 206, max= 388, avg=302.60, stdev=65.26, samples=10 00:31:26.657 lat (msec) : 10=84.23%, 20=10.03%, 50=2.57%, 100=3.17% 00:31:26.657 cpu : usr=91.63%, sys=7.89%, ctx=6, majf=0, minf=84 00:31:26.657 IO depths : 1=2.2%, 2=97.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.657 issued rwts: total=1516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.657 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:26.657 filename0: (groupid=0, jobs=1): err= 0: pid=2258229: Fri Apr 26 09:05:42 2024 00:31:26.657 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(119MiB/5002msec) 00:31:26.657 slat (nsec): min=5657, max=24086, avg=8275.41, stdev=2582.85 00:31:26.657 clat (usec): min=4584, max=61523, avg=15813.72, stdev=15717.37 00:31:26.657 lat (usec): min=4590, max=61529, avg=15821.99, stdev=15717.51 00:31:26.657 clat percentiles (usec): 00:31:26.657 | 1.00th=[ 5604], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 7046], 00:31:26.657 | 30.00th=[ 7701], 40.00th=[ 8356], 50.00th=[ 9241], 60.00th=[10290], 00:31:26.657 | 70.00th=[11731], 80.00th=[14877], 90.00th=[50594], 95.00th=[53740], 00:31:26.657 | 99.00th=[56361], 99.50th=[58983], 99.90th=[61604], 99.95th=[61604], 00:31:26.657 | 99.99th=[61604] 00:31:26.657 bw ( KiB/s): min=16128, max=38400, per=27.81%, avg=23893.33, stdev=7229.45, samples=9 00:31:26.657 iops : min= 126, max= 300, avg=186.67, stdev=56.48, samples=9 00:31:26.657 lat (msec) : 10=56.22%, 20=28.27%, 50=4.11%, 100=11.39% 00:31:26.657 cpu : usr=92.48%, sys=7.14%, ctx=6, majf=0, minf=97 00:31:26.657 IO depths : 1=5.1%, 2=94.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.657 issued rwts: total=948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.657 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:26.657 00:31:26.657 Run status group 0 (all jobs): 00:31:26.657 READ: bw=83.9MiB/s (88.0MB/s), 22.4MiB/s-37.9MiB/s (23.5MB/s-39.7MB/s), io=420MiB (440MB), run=5002-5006msec 00:31:26.657 09:05:42 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:26.657 09:05:42 -- target/dif.sh@43 -- # local sub 00:31:26.657 09:05:42 -- target/dif.sh@45 -- # for sub in "$@" 00:31:26.657 09:05:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:26.657 09:05:42 -- target/dif.sh@36 -- # local sub_id=0 00:31:26.657 09:05:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:26.657 09:05:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.657 09:05:42 -- common/autotest_common.sh@10 -- # set +x 00:31:26.657 09:05:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.657 
09:05:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:26.657 09:05:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.657 09:05:42 -- common/autotest_common.sh@10 -- # set +x 00:31:26.657 09:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.657 09:05:43 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:26.657 09:05:43 -- target/dif.sh@109 -- # bs=4k 00:31:26.657 09:05:43 -- target/dif.sh@109 -- # numjobs=8 00:31:26.657 09:05:43 -- target/dif.sh@109 -- # iodepth=16 00:31:26.657 09:05:43 -- target/dif.sh@109 -- # runtime= 00:31:26.657 09:05:43 -- target/dif.sh@109 -- # files=2 00:31:26.657 09:05:43 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:26.657 09:05:43 -- target/dif.sh@28 -- # local sub 00:31:26.657 09:05:43 -- target/dif.sh@30 -- # for sub in "$@" 00:31:26.657 09:05:43 -- target/dif.sh@31 -- # create_subsystem 0 00:31:26.657 09:05:43 -- target/dif.sh@18 -- # local sub_id=0 00:31:26.657 09:05:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:26.657 09:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.657 09:05:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.657 bdev_null0 00:31:26.657 09:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.657 09:05:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:26.657 09:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.657 09:05:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.657 09:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.657 09:05:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:26.657 09:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.657 09:05:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.657 09:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.657 09:05:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:26.657 09:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.657 09:05:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.657 [2024-04-26 09:05:43.037884] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.657 09:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.657 09:05:43 -- target/dif.sh@30 -- # for sub in "$@" 00:31:26.657 09:05:43 -- target/dif.sh@31 -- # create_subsystem 1 00:31:26.657 09:05:43 -- target/dif.sh@18 -- # local sub_id=1 00:31:26.657 09:05:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:26.657 09:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.657 09:05:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.657 bdev_null1 00:31:26.657 09:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.657 09:05:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:26.657 09:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.657 09:05:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.657 09:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.657 09:05:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:26.657 09:05:43 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.657 09:05:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.657 09:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.657 09:05:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:26.657 09:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.657 09:05:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.657 09:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.657 09:05:43 -- target/dif.sh@30 -- # for sub in "$@" 00:31:26.657 09:05:43 -- target/dif.sh@31 -- # create_subsystem 2 00:31:26.657 09:05:43 -- target/dif.sh@18 -- # local sub_id=2 00:31:26.657 09:05:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:26.657 09:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.657 09:05:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.657 bdev_null2 00:31:26.657 09:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.657 09:05:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:26.657 09:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.657 09:05:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.657 09:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.657 09:05:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:26.657 09:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.657 09:05:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.657 09:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.657 09:05:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:26.657 09:05:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:26.657 09:05:43 -- common/autotest_common.sh@10 -- # set +x 00:31:26.657 09:05:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:26.657 09:05:43 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:26.657 09:05:43 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:26.657 09:05:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:26.657 09:05:43 -- nvmf/common.sh@521 -- # config=() 00:31:26.657 09:05:43 -- nvmf/common.sh@521 -- # local subsystem config 00:31:26.657 09:05:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.657 09:05:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:26.657 09:05:43 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.657 09:05:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:26.657 { 00:31:26.657 "params": { 00:31:26.657 "name": "Nvme$subsystem", 00:31:26.657 "trtype": "$TEST_TRANSPORT", 00:31:26.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:26.657 "adrfam": "ipv4", 00:31:26.657 "trsvcid": "$NVMF_PORT", 00:31:26.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:26.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:26.657 "hdgst": ${hdgst:-false}, 00:31:26.657 "ddgst": ${ddgst:-false} 00:31:26.657 }, 00:31:26.657 "method": "bdev_nvme_attach_controller" 00:31:26.657 } 00:31:26.657 EOF 00:31:26.657 )") 00:31:26.657 09:05:43 -- target/dif.sh@82 -- # gen_fio_conf 
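Stripped of the rpc_cmd/xtrace plumbing, the create_subsystems pass replayed above boils down to four RPCs per subsystem. A sketch using SPDK's rpc.py directly (the test's rpc_cmd wrapper resolves the RPC client and socket for you):

    for i in 0 1 2; do
        # 64 MB null bdev, 512-byte blocks plus 16 bytes of metadata, DIF type 2
        rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
        rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done

The --md-size 16 gives each block room to carry the T10 protection fields, and --dif-type selects which protection type the null bdev emulates.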
00:31:26.657 09:05:43 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:26.657 09:05:43 -- target/dif.sh@54 -- # local file 00:31:26.657 09:05:43 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:26.657 09:05:43 -- target/dif.sh@56 -- # cat 00:31:26.657 09:05:43 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:26.657 09:05:43 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:26.657 09:05:43 -- common/autotest_common.sh@1327 -- # shift 00:31:26.657 09:05:43 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:26.657 09:05:43 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.657 09:05:43 -- nvmf/common.sh@543 -- # cat 00:31:26.657 09:05:43 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:26.657 09:05:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:26.657 09:05:43 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:26.657 09:05:43 -- target/dif.sh@72 -- # (( file <= files )) 00:31:26.657 09:05:43 -- target/dif.sh@73 -- # cat 00:31:26.657 09:05:43 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:26.657 09:05:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:26.657 09:05:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:26.657 { 00:31:26.657 "params": { 00:31:26.657 "name": "Nvme$subsystem", 00:31:26.657 "trtype": "$TEST_TRANSPORT", 00:31:26.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:26.657 "adrfam": "ipv4", 00:31:26.657 "trsvcid": "$NVMF_PORT", 00:31:26.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:26.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:26.657 "hdgst": ${hdgst:-false}, 00:31:26.657 "ddgst": ${ddgst:-false} 00:31:26.657 }, 00:31:26.657 "method": "bdev_nvme_attach_controller" 00:31:26.657 } 00:31:26.657 EOF 00:31:26.657 )") 00:31:26.657 09:05:43 -- target/dif.sh@72 -- # (( file++ )) 00:31:26.657 09:05:43 -- nvmf/common.sh@543 -- # cat 00:31:26.657 09:05:43 -- target/dif.sh@72 -- # (( file <= files )) 00:31:26.657 09:05:43 -- target/dif.sh@73 -- # cat 00:31:26.657 09:05:43 -- target/dif.sh@72 -- # (( file++ )) 00:31:26.657 09:05:43 -- target/dif.sh@72 -- # (( file <= files )) 00:31:26.657 09:05:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:26.657 09:05:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:26.657 { 00:31:26.657 "params": { 00:31:26.657 "name": "Nvme$subsystem", 00:31:26.657 "trtype": "$TEST_TRANSPORT", 00:31:26.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:26.657 "adrfam": "ipv4", 00:31:26.657 "trsvcid": "$NVMF_PORT", 00:31:26.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:26.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:26.657 "hdgst": ${hdgst:-false}, 00:31:26.657 "ddgst": ${ddgst:-false} 00:31:26.657 }, 00:31:26.657 "method": "bdev_nvme_attach_controller" 00:31:26.657 } 00:31:26.657 EOF 00:31:26.657 )") 00:31:26.657 09:05:43 -- nvmf/common.sh@543 -- # cat 00:31:26.657 09:05:43 -- nvmf/common.sh@545 -- # jq . 
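The ldd | grep | awk probes interleaved through the trace decide what goes into LD_PRELOAD: when the fio plugin is linked against a sanitizer runtime (libasan or libclang_rt.asan), that runtime generally has to be preloaded ahead of the plugin or ASan refuses to initialize. The standalone equivalent of that check:

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # third ldd column is the resolved library path, empty if not linked
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # sanitizer runtime (if any) must come before the plugin itself
    export LD_PRELOAD="$asan_lib $plugin"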
00:31:26.657 09:05:43 -- nvmf/common.sh@546 -- # IFS=, 00:31:26.657 09:05:43 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:26.657 09:05:43 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:26.657 09:05:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:26.657 "params": { 00:31:26.657 "name": "Nvme0", 00:31:26.657 "trtype": "tcp", 00:31:26.657 "traddr": "10.0.0.2", 00:31:26.657 "adrfam": "ipv4", 00:31:26.657 "trsvcid": "4420", 00:31:26.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:26.657 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:26.657 "hdgst": false, 00:31:26.657 "ddgst": false 00:31:26.657 }, 00:31:26.657 "method": "bdev_nvme_attach_controller" 00:31:26.657 },{ 00:31:26.657 "params": { 00:31:26.657 "name": "Nvme1", 00:31:26.657 "trtype": "tcp", 00:31:26.657 "traddr": "10.0.0.2", 00:31:26.657 "adrfam": "ipv4", 00:31:26.657 "trsvcid": "4420", 00:31:26.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:26.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:26.657 "hdgst": false, 00:31:26.657 "ddgst": false 00:31:26.657 }, 00:31:26.657 "method": "bdev_nvme_attach_controller" 00:31:26.657 },{ 00:31:26.657 "params": { 00:31:26.657 "name": "Nvme2", 00:31:26.657 "trtype": "tcp", 00:31:26.657 "traddr": "10.0.0.2", 00:31:26.657 "adrfam": "ipv4", 00:31:26.657 "trsvcid": "4420", 00:31:26.657 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:26.657 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:26.657 "hdgst": false, 00:31:26.657 "ddgst": false 00:31:26.657 }, 00:31:26.657 "method": "bdev_nvme_attach_controller" 00:31:26.657 }' 00:31:26.657 09:05:43 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.657 09:05:43 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:26.657 09:05:43 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:26.657 09:05:43 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:26.657 09:05:43 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:26.657 09:05:43 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:26.657 09:05:43 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:26.657 09:05:43 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.657 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:26.657 ... 00:31:26.657 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:26.657 ... 00:31:26.657 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:26.658 ... 
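Three [filenameN] job sections at numjobs=8 are why fio announces 24 threads below. The job file itself travels over /dev/fd/61 and never lands in the log; for this pass (bs=4k, numjobs=8, iodepth=16 against the three attached controllers) gen_fio_conf would emit something along these lines, where the option values and the NvmeXn1 bdev names are inferred from the fio banner rather than read from dif.sh:

    gen_fio_conf() {
        cat <<EOF
    [global]
    thread=1
    rw=randread
    bs=4k
    iodepth=16
    numjobs=8

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1

    [filename2]
    filename=Nvme2n1
EOF
    }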
00:31:26.658 fio-3.35 00:31:26.658 Starting 24 threads 00:31:26.658 EAL: No free 2048 kB hugepages reported on node 1 00:31:38.928 00:31:38.928 filename0: (groupid=0, jobs=1): err= 0: pid=2259432: Fri Apr 26 09:05:54 2024 00:31:38.928 read: IOPS=587, BW=2351KiB/s (2407kB/s)(23.0MiB/10018msec) 00:31:38.928 slat (nsec): min=6110, max=78339, avg=13084.88, stdev=7879.83 00:31:38.928 clat (usec): min=4030, max=49542, avg=27139.55, stdev=5771.72 00:31:38.928 lat (usec): min=4043, max=49548, avg=27152.64, stdev=5772.67 00:31:38.928 clat percentiles (usec): 00:31:38.928 | 1.00th=[ 6980], 5.00th=[19792], 10.00th=[22938], 20.00th=[23725], 00:31:38.928 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25297], 60.00th=[27132], 00:31:38.928 | 70.00th=[30016], 80.00th=[31851], 90.00th=[34341], 95.00th=[36963], 00:31:38.928 | 99.00th=[42730], 99.50th=[43779], 99.90th=[49021], 99.95th=[49546], 00:31:38.928 | 99.99th=[49546] 00:31:38.928 bw ( KiB/s): min= 2200, max= 2872, per=4.11%, avg=2348.80, stdev=147.84, samples=20 00:31:38.928 iops : min= 550, max= 718, avg=587.20, stdev=36.96, samples=20 00:31:38.928 lat (msec) : 10=1.41%, 20=3.60%, 50=94.99% 00:31:38.928 cpu : usr=97.11%, sys=2.45%, ctx=21, majf=0, minf=67 00:31:38.928 IO depths : 1=0.6%, 2=1.4%, 4=8.7%, 8=76.4%, 16=12.9%, 32=0.0%, >=64=0.0% 00:31:38.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.928 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.928 issued rwts: total=5888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.928 filename0: (groupid=0, jobs=1): err= 0: pid=2259433: Fri Apr 26 09:05:54 2024 00:31:38.928 read: IOPS=603, BW=2415KiB/s (2473kB/s)(23.6MiB/10019msec) 00:31:38.928 slat (nsec): min=6209, max=84007, avg=17923.94, stdev=10947.27 00:31:38.928 clat (usec): min=7216, max=49522, avg=26404.79, stdev=4670.36 00:31:38.928 lat (usec): min=7230, max=49532, avg=26422.72, stdev=4669.96 00:31:38.928 clat percentiles (usec): 00:31:38.928 | 1.00th=[16581], 5.00th=[21627], 10.00th=[22676], 20.00th=[23725], 00:31:38.928 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25560], 00:31:38.928 | 70.00th=[26608], 80.00th=[30016], 90.00th=[32637], 95.00th=[35390], 00:31:38.928 | 99.00th=[42206], 99.50th=[44827], 99.90th=[49546], 99.95th=[49546], 00:31:38.928 | 99.99th=[49546] 00:31:38.928 bw ( KiB/s): min= 2096, max= 2560, per=4.22%, avg=2412.80, stdev=102.20, samples=20 00:31:38.928 iops : min= 524, max= 640, avg=603.20, stdev=25.55, samples=20 00:31:38.928 lat (msec) : 10=0.08%, 20=3.42%, 50=96.49% 00:31:38.928 cpu : usr=97.33%, sys=2.20%, ctx=20, majf=0, minf=70 00:31:38.928 IO depths : 1=0.1%, 2=0.5%, 4=6.8%, 8=78.0%, 16=14.5%, 32=0.0%, >=64=0.0% 00:31:38.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.928 complete : 0=0.0%, 4=90.2%, 8=5.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.928 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.928 filename0: (groupid=0, jobs=1): err= 0: pid=2259434: Fri Apr 26 09:05:54 2024 00:31:38.928 read: IOPS=603, BW=2415KiB/s (2473kB/s)(23.6MiB/10026msec) 00:31:38.928 slat (usec): min=6, max=759, avg=29.32, stdev=25.67 00:31:38.928 clat (usec): min=4952, max=56093, avg=26330.94, stdev=5175.74 00:31:38.928 lat (usec): min=4961, max=56117, avg=26360.26, stdev=5175.44 00:31:38.928 clat percentiles (usec): 00:31:38.928 | 1.00th=[14222], 
5.00th=[19530], 10.00th=[22414], 20.00th=[23462], 00:31:38.928 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25560], 00:31:38.928 | 70.00th=[27657], 80.00th=[30540], 90.00th=[32900], 95.00th=[35390], 00:31:38.928 | 99.00th=[41681], 99.50th=[45351], 99.90th=[49546], 99.95th=[55837], 00:31:38.928 | 99.99th=[55837] 00:31:38.928 bw ( KiB/s): min= 2048, max= 2664, per=4.23%, avg=2417.20, stdev=130.61, samples=20 00:31:38.928 iops : min= 512, max= 666, avg=604.30, stdev=32.65, samples=20 00:31:38.928 lat (msec) : 10=0.53%, 20=4.86%, 50=94.53%, 100=0.08% 00:31:38.928 cpu : usr=90.81%, sys=4.27%, ctx=288, majf=0, minf=58 00:31:38.928 IO depths : 1=0.2%, 2=0.6%, 4=7.5%, 8=77.5%, 16=14.2%, 32=0.0%, >=64=0.0% 00:31:38.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.928 complete : 0=0.0%, 4=90.2%, 8=5.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.928 issued rwts: total=6053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.928 filename0: (groupid=0, jobs=1): err= 0: pid=2259435: Fri Apr 26 09:05:54 2024 00:31:38.928 read: IOPS=524, BW=2099KiB/s (2149kB/s)(20.5MiB/10007msec) 00:31:38.928 slat (nsec): min=6360, max=74579, avg=18209.08, stdev=9826.92 00:31:38.928 clat (usec): min=13161, max=68925, avg=30357.54, stdev=5414.13 00:31:38.928 lat (usec): min=13182, max=68937, avg=30375.75, stdev=5415.16 00:31:38.928 clat percentiles (usec): 00:31:38.928 | 1.00th=[21890], 5.00th=[23200], 10.00th=[23987], 20.00th=[24773], 00:31:38.928 | 30.00th=[26346], 40.00th=[29754], 50.00th=[30802], 60.00th=[31851], 00:31:38.928 | 70.00th=[32375], 80.00th=[33817], 90.00th=[36439], 95.00th=[40109], 00:31:38.928 | 99.00th=[43779], 99.50th=[44827], 99.90th=[68682], 99.95th=[68682], 00:31:38.928 | 99.99th=[68682] 00:31:38.928 bw ( KiB/s): min= 1920, max= 2472, per=3.68%, avg=2102.74, stdev=139.34, samples=19 00:31:38.928 iops : min= 480, max= 618, avg=525.68, stdev=34.83, samples=19 00:31:38.928 lat (msec) : 20=0.67%, 50=99.03%, 100=0.30% 00:31:38.928 cpu : usr=97.18%, sys=2.39%, ctx=15, majf=0, minf=50 00:31:38.928 IO depths : 1=3.4%, 2=7.3%, 4=19.6%, 8=60.4%, 16=9.3%, 32=0.0%, >=64=0.0% 00:31:38.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.928 complete : 0=0.0%, 4=92.0%, 8=2.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.928 issued rwts: total=5250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.928 filename0: (groupid=0, jobs=1): err= 0: pid=2259436: Fri Apr 26 09:05:54 2024 00:31:38.928 read: IOPS=594, BW=2379KiB/s (2436kB/s)(23.2MiB/10004msec) 00:31:38.928 slat (nsec): min=6044, max=78606, avg=18756.24, stdev=11097.34 00:31:38.928 clat (usec): min=11320, max=51865, avg=26796.34, stdev=4874.39 00:31:38.928 lat (usec): min=11332, max=51884, avg=26815.09, stdev=4873.33 00:31:38.928 clat percentiles (usec): 00:31:38.928 | 1.00th=[15401], 5.00th=[21365], 10.00th=[22676], 20.00th=[23725], 00:31:38.928 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25297], 60.00th=[25822], 00:31:38.928 | 70.00th=[28967], 80.00th=[31065], 90.00th=[33424], 95.00th=[35390], 00:31:38.928 | 99.00th=[40633], 99.50th=[42206], 99.90th=[45876], 99.95th=[51643], 00:31:38.928 | 99.99th=[51643] 00:31:38.928 bw ( KiB/s): min= 1952, max= 2528, per=4.13%, avg=2364.63, stdev=140.66, samples=19 00:31:38.928 iops : min= 488, max= 632, avg=591.16, stdev=35.17, samples=19 00:31:38.928 lat (msec) : 20=3.63%, 50=96.29%, 100=0.08% 
00:31:38.928 cpu : usr=97.07%, sys=2.50%, ctx=16, majf=0, minf=58 00:31:38.928 IO depths : 1=0.4%, 2=0.9%, 4=8.9%, 8=76.8%, 16=13.0%, 32=0.0%, >=64=0.0% 00:31:38.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.928 complete : 0=0.0%, 4=90.4%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.928 issued rwts: total=5950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.928 filename0: (groupid=0, jobs=1): err= 0: pid=2259437: Fri Apr 26 09:05:54 2024 00:31:38.928 read: IOPS=659, BW=2637KiB/s (2700kB/s)(25.8MiB/10004msec) 00:31:38.928 slat (nsec): min=6080, max=84413, avg=19236.99, stdev=11843.81 00:31:38.928 clat (usec): min=6491, max=45444, avg=24120.80, stdev=3482.71 00:31:38.928 lat (usec): min=6499, max=45460, avg=24140.04, stdev=3484.03 00:31:38.928 clat percentiles (usec): 00:31:38.928 | 1.00th=[13960], 5.00th=[17957], 10.00th=[21365], 20.00th=[22938], 00:31:38.928 | 30.00th=[23462], 40.00th=[23725], 50.00th=[24249], 60.00th=[24511], 00:31:38.928 | 70.00th=[24773], 80.00th=[25297], 90.00th=[26346], 95.00th=[30540], 00:31:38.928 | 99.00th=[35390], 99.50th=[39060], 99.90th=[42206], 99.95th=[45351], 00:31:38.928 | 99.99th=[45351] 00:31:38.928 bw ( KiB/s): min= 2432, max= 2848, per=4.60%, avg=2629.05, stdev=93.80, samples=19 00:31:38.928 iops : min= 608, max= 712, avg=657.26, stdev=23.45, samples=19 00:31:38.928 lat (msec) : 10=0.24%, 20=7.57%, 50=92.19% 00:31:38.928 cpu : usr=97.33%, sys=2.25%, ctx=17, majf=0, minf=40 00:31:38.928 IO depths : 1=3.2%, 2=7.1%, 4=20.9%, 8=59.2%, 16=9.6%, 32=0.0%, >=64=0.0% 00:31:38.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.928 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.928 issued rwts: total=6594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.928 filename0: (groupid=0, jobs=1): err= 0: pid=2259438: Fri Apr 26 09:05:54 2024 00:31:38.928 read: IOPS=573, BW=2293KiB/s (2348kB/s)(22.4MiB/10013msec) 00:31:38.928 slat (nsec): min=6228, max=80211, avg=19138.87, stdev=11734.35 00:31:38.928 clat (usec): min=11865, max=64208, avg=27803.37, stdev=5562.18 00:31:38.928 lat (usec): min=11912, max=64234, avg=27822.50, stdev=5560.93 00:31:38.928 clat percentiles (usec): 00:31:38.928 | 1.00th=[16057], 5.00th=[21890], 10.00th=[23200], 20.00th=[23987], 00:31:38.928 | 30.00th=[24511], 40.00th=[25035], 50.00th=[25560], 60.00th=[28443], 00:31:38.928 | 70.00th=[30802], 80.00th=[32375], 90.00th=[33817], 95.00th=[36439], 00:31:38.928 | 99.00th=[47449], 99.50th=[49021], 99.90th=[57934], 99.95th=[64226], 00:31:38.928 | 99.99th=[64226] 00:31:38.928 bw ( KiB/s): min= 2032, max= 2560, per=3.99%, avg=2284.21, stdev=125.54, samples=19 00:31:38.928 iops : min= 508, max= 640, avg=571.05, stdev=31.39, samples=19 00:31:38.928 lat (msec) : 20=3.07%, 50=96.64%, 100=0.30% 00:31:38.928 cpu : usr=97.42%, sys=2.15%, ctx=66, majf=0, minf=96 00:31:38.928 IO depths : 1=0.3%, 2=0.9%, 4=8.6%, 8=76.9%, 16=13.3%, 32=0.0%, >=64=0.0% 00:31:38.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.928 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.928 issued rwts: total=5739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.929 filename0: (groupid=0, jobs=1): err= 0: pid=2259439: Fri Apr 26 09:05:54 2024 00:31:38.929 read: 
IOPS=607, BW=2429KiB/s (2487kB/s)(23.7MiB/10011msec) 00:31:38.929 slat (nsec): min=6258, max=75785, avg=16750.61, stdev=9764.32 00:31:38.929 clat (usec): min=11015, max=51305, avg=26249.51, stdev=4906.51 00:31:38.929 lat (usec): min=11022, max=51316, avg=26266.26, stdev=4907.30 00:31:38.929 clat percentiles (usec): 00:31:38.929 | 1.00th=[14877], 5.00th=[19792], 10.00th=[22414], 20.00th=[23462], 00:31:38.929 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25297], 00:31:38.929 | 70.00th=[26346], 80.00th=[30802], 90.00th=[33162], 95.00th=[34866], 00:31:38.929 | 99.00th=[40109], 99.50th=[43254], 99.90th=[48497], 99.95th=[48497], 00:31:38.929 | 99.99th=[51119] 00:31:38.929 bw ( KiB/s): min= 2176, max= 2544, per=4.25%, avg=2429.20, stdev=85.39, samples=20 00:31:38.929 iops : min= 544, max= 636, avg=607.30, stdev=21.35, samples=20 00:31:38.929 lat (msec) : 20=5.40%, 50=94.56%, 100=0.05% 00:31:38.929 cpu : usr=97.25%, sys=2.32%, ctx=17, majf=0, minf=73 00:31:38.929 IO depths : 1=0.5%, 2=1.2%, 4=8.1%, 8=77.2%, 16=12.9%, 32=0.0%, >=64=0.0% 00:31:38.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.929 complete : 0=0.0%, 4=90.1%, 8=5.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.929 issued rwts: total=6079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.929 filename1: (groupid=0, jobs=1): err= 0: pid=2259440: Fri Apr 26 09:05:54 2024 00:31:38.929 read: IOPS=596, BW=2387KiB/s (2445kB/s)(23.3MiB/10010msec) 00:31:38.929 slat (nsec): min=5969, max=72942, avg=15059.76, stdev=10693.22 00:31:38.929 clat (usec): min=9294, max=52139, avg=26725.65, stdev=5068.22 00:31:38.929 lat (usec): min=9302, max=52155, avg=26740.71, stdev=5067.61 00:31:38.929 clat percentiles (usec): 00:31:38.929 | 1.00th=[15795], 5.00th=[19792], 10.00th=[22152], 20.00th=[23462], 00:31:38.929 | 30.00th=[23987], 40.00th=[24773], 50.00th=[25297], 60.00th=[26084], 00:31:38.929 | 70.00th=[28705], 80.00th=[31065], 90.00th=[33424], 95.00th=[35390], 00:31:38.929 | 99.00th=[42730], 99.50th=[44303], 99.90th=[46400], 99.95th=[52167], 00:31:38.929 | 99.99th=[52167] 00:31:38.929 bw ( KiB/s): min= 2160, max= 2592, per=4.17%, avg=2383.16, stdev=133.25, samples=19 00:31:38.929 iops : min= 540, max= 648, avg=595.79, stdev=33.31, samples=19 00:31:38.929 lat (msec) : 10=0.05%, 20=5.22%, 50=94.64%, 100=0.08% 00:31:38.929 cpu : usr=97.39%, sys=2.15%, ctx=14, majf=0, minf=40 00:31:38.929 IO depths : 1=0.1%, 2=0.4%, 4=6.5%, 8=77.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:31:38.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.929 complete : 0=0.0%, 4=90.2%, 8=6.5%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.929 issued rwts: total=5974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.929 filename1: (groupid=0, jobs=1): err= 0: pid=2259441: Fri Apr 26 09:05:54 2024 00:31:38.929 read: IOPS=597, BW=2390KiB/s (2447kB/s)(23.4MiB/10019msec) 00:31:38.929 slat (nsec): min=4850, max=74566, avg=19502.94, stdev=11743.66 00:31:38.929 clat (usec): min=11794, max=47792, avg=26658.55, stdev=4766.67 00:31:38.929 lat (usec): min=11802, max=47799, avg=26678.05, stdev=4766.38 00:31:38.929 clat percentiles (usec): 00:31:38.929 | 1.00th=[15008], 5.00th=[21103], 10.00th=[22676], 20.00th=[23725], 00:31:38.929 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[25822], 00:31:38.929 | 70.00th=[28705], 80.00th=[31065], 90.00th=[33424], 95.00th=[34866], 
00:31:38.929 | 99.00th=[40109], 99.50th=[42206], 99.90th=[46400], 99.95th=[47449], 00:31:38.929 | 99.99th=[47973] 00:31:38.929 bw ( KiB/s): min= 2096, max= 2560, per=4.18%, avg=2388.00, stdev=126.80, samples=20 00:31:38.929 iops : min= 524, max= 640, avg=597.00, stdev=31.70, samples=20 00:31:38.929 lat (msec) : 20=4.31%, 50=95.69% 00:31:38.929 cpu : usr=96.93%, sys=2.64%, ctx=18, majf=0, minf=62 00:31:38.929 IO depths : 1=1.0%, 2=2.0%, 4=10.0%, 8=74.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:31:38.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.929 complete : 0=0.0%, 4=90.3%, 8=4.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.929 issued rwts: total=5986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.929 filename1: (groupid=0, jobs=1): err= 0: pid=2259442: Fri Apr 26 09:05:54 2024 00:31:38.929 read: IOPS=577, BW=2312KiB/s (2367kB/s)(22.6MiB/10011msec) 00:31:38.929 slat (nsec): min=6192, max=73716, avg=18326.96, stdev=11114.65 00:31:38.929 clat (usec): min=12135, max=49420, avg=27573.31, stdev=5241.17 00:31:38.929 lat (usec): min=12146, max=49428, avg=27591.64, stdev=5241.24 00:31:38.929 clat percentiles (usec): 00:31:38.929 | 1.00th=[16188], 5.00th=[21890], 10.00th=[22938], 20.00th=[23725], 00:31:38.929 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25297], 60.00th=[27657], 00:31:38.929 | 70.00th=[30540], 80.00th=[32113], 90.00th=[34866], 95.00th=[36963], 00:31:38.929 | 99.00th=[44827], 99.50th=[46924], 99.90th=[47973], 99.95th=[48497], 00:31:38.929 | 99.99th=[49546] 00:31:38.929 bw ( KiB/s): min= 2048, max= 2480, per=4.04%, avg=2312.00, stdev=120.13, samples=20 00:31:38.929 iops : min= 512, max= 620, avg=578.00, stdev=30.03, samples=20 00:31:38.929 lat (msec) : 20=3.40%, 50=96.60% 00:31:38.929 cpu : usr=97.54%, sys=2.02%, ctx=18, majf=0, minf=51 00:31:38.929 IO depths : 1=0.6%, 2=1.3%, 4=9.4%, 8=76.0%, 16=12.7%, 32=0.0%, >=64=0.0% 00:31:38.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.929 complete : 0=0.0%, 4=90.2%, 8=4.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.929 issued rwts: total=5786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.929 filename1: (groupid=0, jobs=1): err= 0: pid=2259443: Fri Apr 26 09:05:54 2024 00:31:38.929 read: IOPS=586, BW=2345KiB/s (2401kB/s)(22.9MiB/10003msec) 00:31:38.929 slat (nsec): min=6146, max=83523, avg=17943.67, stdev=11519.99 00:31:38.929 clat (usec): min=7327, max=48699, avg=27192.26, stdev=5189.76 00:31:38.929 lat (usec): min=7335, max=48706, avg=27210.20, stdev=5188.35 00:31:38.929 clat percentiles (usec): 00:31:38.929 | 1.00th=[14091], 5.00th=[20841], 10.00th=[22676], 20.00th=[23725], 00:31:38.929 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25560], 60.00th=[27132], 00:31:38.929 | 70.00th=[30278], 80.00th=[31851], 90.00th=[33817], 95.00th=[36439], 00:31:38.929 | 99.00th=[41681], 99.50th=[44303], 99.90th=[45876], 99.95th=[48497], 00:31:38.929 | 99.99th=[48497] 00:31:38.929 bw ( KiB/s): min= 1952, max= 2560, per=4.07%, avg=2327.79, stdev=161.66, samples=19 00:31:38.929 iops : min= 488, max= 640, avg=581.95, stdev=40.42, samples=19 00:31:38.929 lat (msec) : 10=0.27%, 20=4.42%, 50=95.31% 00:31:38.929 cpu : usr=97.25%, sys=2.32%, ctx=20, majf=0, minf=66 00:31:38.929 IO depths : 1=0.5%, 2=1.3%, 4=9.7%, 8=75.5%, 16=12.9%, 32=0.0%, >=64=0.0% 00:31:38.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.929 
complete : 0=0.0%, 4=90.6%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.929 issued rwts: total=5864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.929 filename1: (groupid=0, jobs=1): err= 0: pid=2259444: Fri Apr 26 09:05:54 2024 00:31:38.929 read: IOPS=657, BW=2631KiB/s (2694kB/s)(25.7MiB/10008msec) 00:31:38.929 slat (nsec): min=2959, max=65897, avg=12273.39, stdev=7312.20 00:31:38.929 clat (usec): min=3751, max=43742, avg=24245.90, stdev=4139.75 00:31:38.929 lat (usec): min=3759, max=43769, avg=24258.17, stdev=4141.05 00:31:38.929 clat percentiles (usec): 00:31:38.929 | 1.00th=[ 7373], 5.00th=[16909], 10.00th=[21890], 20.00th=[23200], 00:31:38.929 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:31:38.929 | 70.00th=[25035], 80.00th=[25297], 90.00th=[27132], 95.00th=[31589], 00:31:38.929 | 99.00th=[35914], 99.50th=[37487], 99.90th=[43254], 99.95th=[43779], 00:31:38.929 | 99.99th=[43779] 00:31:38.929 bw ( KiB/s): min= 2464, max= 3072, per=4.60%, avg=2630.40, stdev=122.56, samples=20 00:31:38.929 iops : min= 616, max= 768, avg=657.60, stdev=30.64, samples=20 00:31:38.929 lat (msec) : 4=0.09%, 10=1.52%, 20=5.91%, 50=92.48% 00:31:38.929 cpu : usr=97.01%, sys=2.56%, ctx=14, majf=0, minf=69 00:31:38.929 IO depths : 1=0.9%, 2=1.8%, 4=9.0%, 8=76.0%, 16=12.3%, 32=0.0%, >=64=0.0% 00:31:38.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.929 complete : 0=0.0%, 4=90.1%, 8=4.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.929 issued rwts: total=6582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.929 filename1: (groupid=0, jobs=1): err= 0: pid=2259445: Fri Apr 26 09:05:54 2024 00:31:38.929 read: IOPS=596, BW=2387KiB/s (2444kB/s)(23.3MiB/10014msec) 00:31:38.929 slat (nsec): min=6240, max=71797, avg=16899.77, stdev=9978.92 00:31:38.929 clat (usec): min=8276, max=45668, avg=26711.76, stdev=5011.23 00:31:38.929 lat (usec): min=8288, max=45702, avg=26728.66, stdev=5012.12 00:31:38.929 clat percentiles (usec): 00:31:38.929 | 1.00th=[14353], 5.00th=[21103], 10.00th=[22676], 20.00th=[23725], 00:31:38.929 | 30.00th=[23987], 40.00th=[24511], 50.00th=[25035], 60.00th=[25560], 00:31:38.929 | 70.00th=[28967], 80.00th=[31327], 90.00th=[33424], 95.00th=[35914], 00:31:38.929 | 99.00th=[40633], 99.50th=[42206], 99.90th=[44827], 99.95th=[45351], 00:31:38.929 | 99.99th=[45876] 00:31:38.929 bw ( KiB/s): min= 2176, max= 2608, per=4.17%, avg=2384.80, stdev=132.42, samples=20 00:31:38.929 iops : min= 544, max= 652, avg=596.20, stdev=33.11, samples=20 00:31:38.929 lat (msec) : 10=0.07%, 20=4.17%, 50=95.77% 00:31:38.929 cpu : usr=96.97%, sys=2.60%, ctx=21, majf=0, minf=59 00:31:38.929 IO depths : 1=0.9%, 2=1.8%, 4=9.4%, 8=75.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:31:38.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.929 complete : 0=0.0%, 4=90.3%, 8=4.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.929 issued rwts: total=5975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.929 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.929 filename1: (groupid=0, jobs=1): err= 0: pid=2259446: Fri Apr 26 09:05:54 2024 00:31:38.929 read: IOPS=620, BW=2483KiB/s (2543kB/s)(24.3MiB/10025msec) 00:31:38.929 slat (nsec): min=6206, max=81501, avg=16364.40, stdev=9999.87 00:31:38.929 clat (usec): min=4883, max=45058, avg=25659.16, stdev=4911.01 00:31:38.929 lat (usec): min=4915, max=45096, 
avg=25675.52, stdev=4912.25 00:31:38.929 clat percentiles (usec): 00:31:38.929 | 1.00th=[12256], 5.00th=[18220], 10.00th=[22152], 20.00th=[23462], 00:31:38.929 | 30.00th=[23725], 40.00th=[24249], 50.00th=[24511], 60.00th=[25035], 00:31:38.929 | 70.00th=[25560], 80.00th=[29754], 90.00th=[32375], 95.00th=[34866], 00:31:38.930 | 99.00th=[40109], 99.50th=[41157], 99.90th=[43779], 99.95th=[44827], 00:31:38.930 | 99.99th=[44827] 00:31:38.930 bw ( KiB/s): min= 2304, max= 2720, per=4.35%, avg=2485.60, stdev=102.55, samples=20 00:31:38.930 iops : min= 576, max= 680, avg=621.40, stdev=25.64, samples=20 00:31:38.930 lat (msec) : 10=0.67%, 20=5.40%, 50=93.93% 00:31:38.930 cpu : usr=96.92%, sys=2.66%, ctx=19, majf=0, minf=44 00:31:38.930 IO depths : 1=0.6%, 2=1.3%, 4=8.3%, 8=76.9%, 16=12.9%, 32=0.0%, >=64=0.0% 00:31:38.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 complete : 0=0.0%, 4=90.1%, 8=5.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 issued rwts: total=6224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.930 filename1: (groupid=0, jobs=1): err= 0: pid=2259447: Fri Apr 26 09:05:54 2024 00:31:38.930 read: IOPS=594, BW=2378KiB/s (2436kB/s)(23.3MiB/10045msec) 00:31:38.930 slat (nsec): min=6068, max=81839, avg=19135.62, stdev=12075.39 00:31:38.930 clat (usec): min=12426, max=50392, avg=26761.64, stdev=4861.53 00:31:38.930 lat (usec): min=12440, max=50399, avg=26780.78, stdev=4859.93 00:31:38.930 clat percentiles (usec): 00:31:38.930 | 1.00th=[14746], 5.00th=[21627], 10.00th=[22676], 20.00th=[23725], 00:31:38.930 | 30.00th=[23987], 40.00th=[24511], 50.00th=[25035], 60.00th=[25822], 00:31:38.930 | 70.00th=[28705], 80.00th=[31327], 90.00th=[33424], 95.00th=[35914], 00:31:38.930 | 99.00th=[40109], 99.50th=[43779], 99.90th=[47973], 99.95th=[50070], 00:31:38.930 | 99.99th=[50594] 00:31:38.930 bw ( KiB/s): min= 2132, max= 2552, per=4.19%, avg=2398.11, stdev=116.03, samples=19 00:31:38.930 iops : min= 533, max= 638, avg=599.53, stdev=29.01, samples=19 00:31:38.930 lat (msec) : 20=3.42%, 50=96.57%, 100=0.02% 00:31:38.930 cpu : usr=96.89%, sys=2.69%, ctx=15, majf=0, minf=42 00:31:38.930 IO depths : 1=0.5%, 2=1.1%, 4=8.9%, 8=76.0%, 16=13.6%, 32=0.0%, >=64=0.0% 00:31:38.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 complete : 0=0.0%, 4=90.5%, 8=5.3%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 issued rwts: total=5973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.930 filename2: (groupid=0, jobs=1): err= 0: pid=2259448: Fri Apr 26 09:05:54 2024 00:31:38.930 read: IOPS=596, BW=2387KiB/s (2444kB/s)(23.3MiB/10011msec) 00:31:38.930 slat (nsec): min=6247, max=81050, avg=18041.35, stdev=10692.73 00:31:38.930 clat (usec): min=9795, max=47332, avg=26700.62, stdev=5012.50 00:31:38.930 lat (usec): min=9807, max=47346, avg=26718.66, stdev=5012.88 00:31:38.930 clat percentiles (usec): 00:31:38.930 | 1.00th=[14091], 5.00th=[21103], 10.00th=[22676], 20.00th=[23462], 00:31:38.930 | 30.00th=[23987], 40.00th=[24511], 50.00th=[25035], 60.00th=[25560], 00:31:38.930 | 70.00th=[29230], 80.00th=[31327], 90.00th=[33817], 95.00th=[35390], 00:31:38.930 | 99.00th=[40633], 99.50th=[42206], 99.90th=[46400], 99.95th=[46924], 00:31:38.930 | 99.99th=[47449] 00:31:38.930 bw ( KiB/s): min= 2104, max= 2560, per=4.17%, avg=2387.20, stdev=127.11, samples=20 00:31:38.930 iops : min= 526, max= 640, avg=596.80, 
stdev=31.78, samples=20 00:31:38.930 lat (msec) : 10=0.05%, 20=4.39%, 50=95.56% 00:31:38.930 cpu : usr=97.45%, sys=2.11%, ctx=16, majf=0, minf=75 00:31:38.930 IO depths : 1=0.9%, 2=2.0%, 4=9.6%, 8=75.0%, 16=12.5%, 32=0.0%, >=64=0.0% 00:31:38.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 issued rwts: total=5974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.930 filename2: (groupid=0, jobs=1): err= 0: pid=2259449: Fri Apr 26 09:05:54 2024 00:31:38.930 read: IOPS=599, BW=2398KiB/s (2455kB/s)(23.4MiB/10003msec) 00:31:38.930 slat (nsec): min=5369, max=74907, avg=15116.90, stdev=10432.91 00:31:38.930 clat (usec): min=7487, max=61210, avg=26614.98, stdev=5023.98 00:31:38.930 lat (usec): min=7496, max=61232, avg=26630.10, stdev=5023.56 00:31:38.930 clat percentiles (usec): 00:31:38.930 | 1.00th=[14484], 5.00th=[21627], 10.00th=[22676], 20.00th=[23462], 00:31:38.930 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[25822], 00:31:38.930 | 70.00th=[28181], 80.00th=[30802], 90.00th=[32900], 95.00th=[34866], 00:31:38.930 | 99.00th=[43254], 99.50th=[45351], 99.90th=[55837], 99.95th=[61080], 00:31:38.930 | 99.99th=[61080] 00:31:38.930 bw ( KiB/s): min= 2088, max= 2544, per=4.17%, avg=2382.89, stdev=116.42, samples=19 00:31:38.930 iops : min= 522, max= 636, avg=595.68, stdev=29.07, samples=19 00:31:38.930 lat (msec) : 10=0.37%, 20=3.00%, 50=96.36%, 100=0.27% 00:31:38.930 cpu : usr=97.38%, sys=2.18%, ctx=16, majf=0, minf=97 00:31:38.930 IO depths : 1=0.1%, 2=0.6%, 4=7.0%, 8=77.9%, 16=14.4%, 32=0.0%, >=64=0.0% 00:31:38.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 complete : 0=0.0%, 4=90.3%, 8=5.6%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 issued rwts: total=5996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.930 filename2: (groupid=0, jobs=1): err= 0: pid=2259450: Fri Apr 26 09:05:54 2024 00:31:38.930 read: IOPS=613, BW=2452KiB/s (2511kB/s)(24.0MiB/10008msec) 00:31:38.930 slat (nsec): min=5943, max=79789, avg=18455.15, stdev=11153.53 00:31:38.930 clat (usec): min=6834, max=52563, avg=25994.10, stdev=4317.35 00:31:38.930 lat (usec): min=6845, max=52579, avg=26012.55, stdev=4316.52 00:31:38.930 clat percentiles (usec): 00:31:38.930 | 1.00th=[15139], 5.00th=[21627], 10.00th=[22938], 20.00th=[23462], 00:31:38.930 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25297], 00:31:38.930 | 70.00th=[25822], 80.00th=[28967], 90.00th=[32113], 95.00th=[33817], 00:31:38.930 | 99.00th=[40633], 99.50th=[44303], 99.90th=[50594], 99.95th=[52691], 00:31:38.930 | 99.99th=[52691] 00:31:38.930 bw ( KiB/s): min= 2272, max= 2664, per=4.27%, avg=2442.11, stdev=104.81, samples=19 00:31:38.930 iops : min= 568, max= 666, avg=610.53, stdev=26.20, samples=19 00:31:38.930 lat (msec) : 10=0.02%, 20=3.15%, 50=96.66%, 100=0.18% 00:31:38.930 cpu : usr=97.17%, sys=2.37%, ctx=18, majf=0, minf=64 00:31:38.930 IO depths : 1=0.2%, 2=0.7%, 4=7.1%, 8=78.5%, 16=13.5%, 32=0.0%, >=64=0.0% 00:31:38.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 complete : 0=0.0%, 4=89.9%, 8=5.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 issued rwts: total=6136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.930 
filename2: (groupid=0, jobs=1): err= 0: pid=2259451: Fri Apr 26 09:05:54 2024 00:31:38.930 read: IOPS=588, BW=2352KiB/s (2409kB/s)(23.0MiB/10015msec) 00:31:38.930 slat (nsec): min=6217, max=87281, avg=29419.55, stdev=16122.27 00:31:38.930 clat (usec): min=4205, max=49774, avg=27029.12, stdev=5915.65 00:31:38.930 lat (usec): min=4218, max=49798, avg=27058.54, stdev=5916.62 00:31:38.930 clat percentiles (usec): 00:31:38.930 | 1.00th=[ 6456], 5.00th=[19792], 10.00th=[22414], 20.00th=[23462], 00:31:38.930 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25297], 60.00th=[26608], 00:31:38.930 | 70.00th=[30016], 80.00th=[31851], 90.00th=[33817], 95.00th=[36963], 00:31:38.930 | 99.00th=[44303], 99.50th=[47449], 99.90th=[49546], 99.95th=[49546], 00:31:38.930 | 99.99th=[49546] 00:31:38.930 bw ( KiB/s): min= 2144, max= 2768, per=4.12%, avg=2353.60, stdev=136.92, samples=20 00:31:38.930 iops : min= 536, max= 692, avg=588.40, stdev=34.23, samples=20 00:31:38.930 lat (msec) : 10=1.36%, 20=3.94%, 50=94.70% 00:31:38.930 cpu : usr=97.54%, sys=1.76%, ctx=169, majf=0, minf=63 00:31:38.930 IO depths : 1=0.2%, 2=0.6%, 4=6.9%, 8=79.0%, 16=13.3%, 32=0.0%, >=64=0.0% 00:31:38.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 complete : 0=0.0%, 4=89.6%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 issued rwts: total=5890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.930 filename2: (groupid=0, jobs=1): err= 0: pid=2259452: Fri Apr 26 09:05:54 2024 00:31:38.930 read: IOPS=587, BW=2348KiB/s (2405kB/s)(22.9MiB/10003msec) 00:31:38.930 slat (nsec): min=5195, max=74123, avg=18045.51, stdev=11327.59 00:31:38.930 clat (usec): min=6463, max=49797, avg=27152.78, stdev=5440.01 00:31:38.930 lat (usec): min=6470, max=49821, avg=27170.83, stdev=5438.92 00:31:38.930 clat percentiles (usec): 00:31:38.930 | 1.00th=[14222], 5.00th=[21103], 10.00th=[22676], 20.00th=[23725], 00:31:38.930 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25297], 60.00th=[26346], 00:31:38.930 | 70.00th=[29492], 80.00th=[31589], 90.00th=[34341], 95.00th=[36963], 00:31:38.930 | 99.00th=[43779], 99.50th=[46400], 99.90th=[48497], 99.95th=[49546], 00:31:38.930 | 99.99th=[49546] 00:31:38.930 bw ( KiB/s): min= 2123, max= 2512, per=4.08%, avg=2333.63, stdev=101.55, samples=19 00:31:38.930 iops : min= 530, max= 628, avg=583.37, stdev=25.47, samples=19 00:31:38.930 lat (msec) : 10=0.39%, 20=3.85%, 50=95.76% 00:31:38.930 cpu : usr=97.17%, sys=2.40%, ctx=13, majf=0, minf=74 00:31:38.930 IO depths : 1=0.2%, 2=0.8%, 4=7.8%, 8=78.1%, 16=13.1%, 32=0.0%, >=64=0.0% 00:31:38.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 complete : 0=0.0%, 4=90.0%, 8=5.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 issued rwts: total=5873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.930 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.930 filename2: (groupid=0, jobs=1): err= 0: pid=2259453: Fri Apr 26 09:05:54 2024 00:31:38.930 read: IOPS=586, BW=2347KiB/s (2403kB/s)(23.0MiB/10019msec) 00:31:38.930 slat (nsec): min=6187, max=80017, avg=19665.14, stdev=11796.99 00:31:38.930 clat (usec): min=11897, max=46190, avg=27132.85, stdev=4758.54 00:31:38.930 lat (usec): min=11911, max=46207, avg=27152.51, stdev=4757.78 00:31:38.930 clat percentiles (usec): 00:31:38.930 | 1.00th=[16319], 5.00th=[22152], 10.00th=[23200], 20.00th=[23725], 00:31:38.930 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25297], 60.00th=[26084], 
00:31:38.930 | 70.00th=[29754], 80.00th=[31851], 90.00th=[33817], 95.00th=[35914], 00:31:38.930 | 99.00th=[40109], 99.50th=[41157], 99.90th=[45876], 99.95th=[46400], 00:31:38.930 | 99.99th=[46400] 00:31:38.930 bw ( KiB/s): min= 2128, max= 2528, per=4.10%, avg=2345.20, stdev=111.19, samples=20 00:31:38.930 iops : min= 532, max= 632, avg=586.30, stdev=27.80, samples=20 00:31:38.930 lat (msec) : 20=2.86%, 50=97.14% 00:31:38.930 cpu : usr=97.10%, sys=2.47%, ctx=16, majf=0, minf=56 00:31:38.930 IO depths : 1=0.8%, 2=1.8%, 4=10.1%, 8=75.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:31:38.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 complete : 0=0.0%, 4=90.4%, 8=4.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.930 issued rwts: total=5879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.931 filename2: (groupid=0, jobs=1): err= 0: pid=2259454: Fri Apr 26 09:05:54 2024 00:31:38.931 read: IOPS=602, BW=2412KiB/s (2470kB/s)(23.6MiB/10019msec) 00:31:38.931 slat (nsec): min=6181, max=83501, avg=17784.19, stdev=10891.52 00:31:38.931 clat (usec): min=8450, max=53603, avg=26420.12, stdev=4832.26 00:31:38.931 lat (usec): min=8467, max=53628, avg=26437.91, stdev=4832.81 00:31:38.931 clat percentiles (usec): 00:31:38.931 | 1.00th=[14877], 5.00th=[21103], 10.00th=[22938], 20.00th=[23462], 00:31:38.931 | 30.00th=[23987], 40.00th=[24511], 50.00th=[24773], 60.00th=[25297], 00:31:38.931 | 70.00th=[27132], 80.00th=[30802], 90.00th=[33162], 95.00th=[34866], 00:31:38.931 | 99.00th=[40109], 99.50th=[43779], 99.90th=[47973], 99.95th=[49546], 00:31:38.931 | 99.99th=[53740] 00:31:38.931 bw ( KiB/s): min= 2272, max= 2560, per=4.22%, avg=2413.60, stdev=89.80, samples=20 00:31:38.931 iops : min= 568, max= 640, avg=603.40, stdev=22.45, samples=20 00:31:38.931 lat (msec) : 10=0.15%, 20=4.25%, 50=95.55%, 100=0.05% 00:31:38.931 cpu : usr=97.58%, sys=1.99%, ctx=15, majf=0, minf=56 00:31:38.931 IO depths : 1=0.5%, 2=1.1%, 4=8.4%, 8=77.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:31:38.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.931 complete : 0=0.0%, 4=89.8%, 8=5.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.931 issued rwts: total=6041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.931 filename2: (groupid=0, jobs=1): err= 0: pid=2259455: Fri Apr 26 09:05:54 2024 00:31:38.931 read: IOPS=583, BW=2335KiB/s (2391kB/s)(22.9MiB/10047msec) 00:31:38.931 slat (nsec): min=6094, max=72862, avg=13229.00, stdev=9222.07 00:31:38.931 clat (usec): min=12037, max=59976, avg=27326.35, stdev=5406.24 00:31:38.931 lat (usec): min=12045, max=59991, avg=27339.58, stdev=5405.91 00:31:38.931 clat percentiles (usec): 00:31:38.931 | 1.00th=[16319], 5.00th=[20841], 10.00th=[22676], 20.00th=[23725], 00:31:38.931 | 30.00th=[24249], 40.00th=[24773], 50.00th=[25560], 60.00th=[27132], 00:31:38.931 | 70.00th=[29754], 80.00th=[31589], 90.00th=[33817], 95.00th=[36439], 00:31:38.931 | 99.00th=[45351], 99.50th=[48497], 99.90th=[60031], 99.95th=[60031], 00:31:38.931 | 99.99th=[60031] 00:31:38.931 bw ( KiB/s): min= 2096, max= 2608, per=4.09%, avg=2340.40, stdev=146.26, samples=20 00:31:38.931 iops : min= 524, max= 652, avg=585.10, stdev=36.57, samples=20 00:31:38.931 lat (msec) : 20=4.09%, 50=95.43%, 100=0.48% 00:31:38.931 cpu : usr=97.28%, sys=2.23%, ctx=18, majf=0, minf=61 00:31:38.931 IO depths : 1=0.2%, 2=0.7%, 4=7.0%, 8=77.6%, 16=14.6%, 32=0.0%, >=64=0.0% 
00:31:38.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.931 complete : 0=0.0%, 4=90.2%, 8=6.1%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.931 issued rwts: total=5865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.931 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:38.931 00:31:38.931 Run status group 0 (all jobs): 00:31:38.931 READ: bw=55.8MiB/s (58.6MB/s), 2099KiB/s-2637KiB/s (2149kB/s-2700kB/s), io=561MiB (588MB), run=10003-10047msec 00:31:38.931 09:05:54 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:38.931 09:05:54 -- target/dif.sh@43 -- # local sub 00:31:38.931 09:05:54 -- target/dif.sh@45 -- # for sub in "$@" 00:31:38.931 09:05:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:38.931 09:05:54 -- target/dif.sh@36 -- # local sub_id=0 00:31:38.931 09:05:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:38.931 09:05:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.931 09:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 09:05:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.931 09:05:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:38.931 09:05:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.931 09:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 09:05:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.931 09:05:54 -- target/dif.sh@45 -- # for sub in "$@" 00:31:38.931 09:05:54 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:38.931 09:05:54 -- target/dif.sh@36 -- # local sub_id=1 00:31:38.931 09:05:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:38.931 09:05:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.931 09:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 09:05:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.931 09:05:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:38.931 09:05:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.931 09:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 09:05:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.931 09:05:54 -- target/dif.sh@45 -- # for sub in "$@" 00:31:38.931 09:05:54 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:38.931 09:05:54 -- target/dif.sh@36 -- # local sub_id=2 00:31:38.931 09:05:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:38.931 09:05:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.931 09:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 09:05:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.931 09:05:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:38.931 09:05:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.931 09:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 09:05:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.931 09:05:54 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:38.931 09:05:54 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:38.931 09:05:54 -- target/dif.sh@115 -- # numjobs=2 00:31:38.931 09:05:54 -- target/dif.sh@115 -- # iodepth=8 00:31:38.931 09:05:54 -- target/dif.sh@115 -- # runtime=5 00:31:38.931 09:05:54 -- target/dif.sh@115 -- # files=1 00:31:38.931 09:05:54 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:38.931 09:05:54 -- target/dif.sh@28 -- # local sub 00:31:38.931 09:05:54 -- 
target/dif.sh@30 -- # for sub in "$@" 00:31:38.931 09:05:54 -- target/dif.sh@31 -- # create_subsystem 0 00:31:38.931 09:05:54 -- target/dif.sh@18 -- # local sub_id=0 00:31:38.931 09:05:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:38.931 09:05:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.931 09:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 bdev_null0 00:31:38.931 09:05:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.931 09:05:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:38.931 09:05:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.931 09:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 09:05:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.931 09:05:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:38.931 09:05:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.931 09:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 09:05:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.931 09:05:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:38.931 09:05:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.931 09:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 [2024-04-26 09:05:54.655865] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.931 09:05:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.931 09:05:54 -- target/dif.sh@30 -- # for sub in "$@" 00:31:38.931 09:05:54 -- target/dif.sh@31 -- # create_subsystem 1 00:31:38.931 09:05:54 -- target/dif.sh@18 -- # local sub_id=1 00:31:38.931 09:05:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:38.931 09:05:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.931 09:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 bdev_null1 00:31:38.931 09:05:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.931 09:05:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:38.931 09:05:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.931 09:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 09:05:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.931 09:05:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:38.931 09:05:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.931 09:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 09:05:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.931 09:05:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:38.931 09:05:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:38.931 09:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:38.931 09:05:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:38.931 09:05:54 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:38.931 09:05:54 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:38.931 09:05:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:38.931 09:05:54 -- nvmf/common.sh@521 -- # config=() 
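The xtrace above is dif.sh's create_subsystem helper at work: each subsystem gets a DIF-type-1 null bdev, an NVMe-oF subsystem wrapping it, and a TCP listener on 10.0.0.2:4420. A minimal standalone sketch of the same RPC sequence, assuming the in-tree scripts/rpc.py against a running nvmf_tgt (all arguments are copied from the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path, alongside build/fio above
  for i in 0 1; do
    # 64 MB null bdev with 512-byte blocks, 16-byte metadata, DIF type 1
    $rpc bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done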
00:31:38.931 09:05:54 -- nvmf/common.sh@521 -- # local subsystem config 00:31:38.931 09:05:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:38.931 09:05:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:38.931 09:05:54 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:38.931 09:05:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:38.931 { 00:31:38.931 "params": { 00:31:38.931 "name": "Nvme$subsystem", 00:31:38.931 "trtype": "$TEST_TRANSPORT", 00:31:38.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:38.931 "adrfam": "ipv4", 00:31:38.931 "trsvcid": "$NVMF_PORT", 00:31:38.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:38.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:38.931 "hdgst": ${hdgst:-false}, 00:31:38.931 "ddgst": ${ddgst:-false} 00:31:38.931 }, 00:31:38.931 "method": "bdev_nvme_attach_controller" 00:31:38.931 } 00:31:38.931 EOF 00:31:38.931 )") 00:31:38.931 09:05:54 -- target/dif.sh@82 -- # gen_fio_conf 00:31:38.931 09:05:54 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:38.931 09:05:54 -- target/dif.sh@54 -- # local file 00:31:38.931 09:05:54 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:38.931 09:05:54 -- target/dif.sh@56 -- # cat 00:31:38.931 09:05:54 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:38.932 09:05:54 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:38.932 09:05:54 -- common/autotest_common.sh@1327 -- # shift 00:31:38.932 09:05:54 -- nvmf/common.sh@543 -- # cat 00:31:38.932 09:05:54 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:38.932 09:05:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:38.932 09:05:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:38.932 09:05:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:38.932 09:05:54 -- target/dif.sh@72 -- # (( file <= files )) 00:31:38.932 09:05:54 -- target/dif.sh@73 -- # cat 00:31:38.932 09:05:54 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:38.932 09:05:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:38.932 09:05:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:38.932 09:05:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:38.932 { 00:31:38.932 "params": { 00:31:38.932 "name": "Nvme$subsystem", 00:31:38.932 "trtype": "$TEST_TRANSPORT", 00:31:38.932 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:38.932 "adrfam": "ipv4", 00:31:38.932 "trsvcid": "$NVMF_PORT", 00:31:38.932 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:38.932 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:38.932 "hdgst": ${hdgst:-false}, 00:31:38.932 "ddgst": ${ddgst:-false} 00:31:38.932 }, 00:31:38.932 "method": "bdev_nvme_attach_controller" 00:31:38.932 } 00:31:38.932 EOF 00:31:38.932 )") 00:31:38.932 09:05:54 -- nvmf/common.sh@543 -- # cat 00:31:38.932 09:05:54 -- target/dif.sh@72 -- # (( file++ )) 00:31:38.932 09:05:54 -- target/dif.sh@72 -- # (( file <= files )) 00:31:38.932 09:05:54 -- nvmf/common.sh@545 -- # jq . 
00:31:38.932 09:05:54 -- nvmf/common.sh@546 -- # IFS=, 00:31:38.932 09:05:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:38.932 "params": { 00:31:38.932 "name": "Nvme0", 00:31:38.932 "trtype": "tcp", 00:31:38.932 "traddr": "10.0.0.2", 00:31:38.932 "adrfam": "ipv4", 00:31:38.932 "trsvcid": "4420", 00:31:38.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:38.932 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:38.932 "hdgst": false, 00:31:38.932 "ddgst": false 00:31:38.932 }, 00:31:38.932 "method": "bdev_nvme_attach_controller" 00:31:38.932 },{ 00:31:38.932 "params": { 00:31:38.932 "name": "Nvme1", 00:31:38.932 "trtype": "tcp", 00:31:38.932 "traddr": "10.0.0.2", 00:31:38.932 "adrfam": "ipv4", 00:31:38.932 "trsvcid": "4420", 00:31:38.932 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:38.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:38.932 "hdgst": false, 00:31:38.932 "ddgst": false 00:31:38.932 }, 00:31:38.932 "method": "bdev_nvme_attach_controller" 00:31:38.932 }' 00:31:38.932 09:05:54 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:38.932 09:05:54 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:38.932 09:05:54 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:38.932 09:05:54 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:38.932 09:05:54 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:38.932 09:05:54 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:38.932 09:05:54 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:38.932 09:05:54 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:38.932 09:05:54 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:38.932 09:05:54 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:38.932 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:38.932 ... 00:31:38.932 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:38.932 ... 
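For context, the printf/jq pipeline traced here emits only the comma-joined "params" objects; gen_nvmf_target_json then wraps them in the standard SPDK bdev-subsystem envelope before fio reads the result from /dev/fd/62. A sketch of the document the spdk_bdev engine consumes, reconstructed from the traced params and assuming the usual subsystems/config wrapper (only the Nvme0 entry is spelled out; Nvme1 repeats it against cnode1/host1):

  cat <<'EOF' > spdk_json_conf.json   # hypothetical file; the test actually streams this via /dev/fd/62
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF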
00:31:38.932 fio-3.35 00:31:38.932 Starting 4 threads 00:31:38.932 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.200 00:31:44.200 filename0: (groupid=0, jobs=1): err= 0: pid=2261449: Fri Apr 26 09:06:00 2024 00:31:44.200 read: IOPS=2725, BW=21.3MiB/s (22.3MB/s)(107MiB/5002msec) 00:31:44.200 slat (nsec): min=5701, max=48429, avg=8420.00, stdev=3196.93 00:31:44.200 clat (usec): min=1629, max=7371, avg=2914.00, stdev=407.84 00:31:44.200 lat (usec): min=1636, max=7403, avg=2922.42, stdev=407.83 00:31:44.200 clat percentiles (usec): 00:31:44.200 | 1.00th=[ 2024], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2573], 00:31:44.200 | 30.00th=[ 2737], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:31:44.200 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3392], 95.00th=[ 3589], 00:31:44.200 | 99.00th=[ 3982], 99.50th=[ 4146], 99.90th=[ 4424], 99.95th=[ 6915], 00:31:44.200 | 99.99th=[ 7308] 00:31:44.200 bw ( KiB/s): min=21584, max=22032, per=25.41%, avg=21808.00, stdev=168.82, samples=10 00:31:44.200 iops : min= 2698, max= 2754, avg=2726.00, stdev=21.10, samples=10 00:31:44.200 lat (msec) : 2=0.76%, 4=98.32%, 10=0.92% 00:31:44.200 cpu : usr=93.56%, sys=6.08%, ctx=7, majf=0, minf=0 00:31:44.200 IO depths : 1=0.1%, 2=0.9%, 4=66.0%, 8=33.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:44.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.200 complete : 0=0.0%, 4=96.4%, 8=3.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.200 issued rwts: total=13633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.200 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:44.200 filename0: (groupid=0, jobs=1): err= 0: pid=2261450: Fri Apr 26 09:06:00 2024 00:31:44.200 read: IOPS=2645, BW=20.7MiB/s (21.7MB/s)(103MiB/5002msec) 00:31:44.200 slat (nsec): min=5782, max=64761, avg=9174.20, stdev=4582.26 00:31:44.200 clat (usec): min=1640, max=46940, avg=3001.56, stdev=1149.71 00:31:44.200 lat (usec): min=1646, max=46959, avg=3010.74, stdev=1149.70 00:31:44.200 clat percentiles (usec): 00:31:44.200 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2671], 00:31:44.200 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2933], 60.00th=[ 2999], 00:31:44.200 | 70.00th=[ 3130], 80.00th=[ 3261], 90.00th=[ 3458], 95.00th=[ 3654], 00:31:44.200 | 99.00th=[ 4080], 99.50th=[ 4228], 99.90th=[ 4752], 99.95th=[46924], 00:31:44.200 | 99.99th=[46924] 00:31:44.200 bw ( KiB/s): min=19334, max=21728, per=24.65%, avg=21162.20, stdev=666.49, samples=10 00:31:44.200 iops : min= 2416, max= 2716, avg=2645.40, stdev=83.60, samples=10 00:31:44.200 lat (msec) : 2=0.59%, 4=98.16%, 10=1.19%, 50=0.06% 00:31:44.200 cpu : usr=93.42%, sys=5.96%, ctx=88, majf=0, minf=11 00:31:44.200 IO depths : 1=0.1%, 2=0.9%, 4=65.8%, 8=33.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:44.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.200 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.200 issued rwts: total=13231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.200 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:44.200 filename1: (groupid=0, jobs=1): err= 0: pid=2261451: Fri Apr 26 09:06:00 2024 00:31:44.200 read: IOPS=2707, BW=21.2MiB/s (22.2MB/s)(106MiB/5001msec) 00:31:44.200 slat (nsec): min=5688, max=48414, avg=8427.36, stdev=3300.37 00:31:44.200 clat (usec): min=1633, max=7134, avg=2933.60, stdev=408.91 00:31:44.200 lat (usec): min=1639, max=7164, avg=2942.03, stdev=408.88 00:31:44.200 clat percentiles (usec): 00:31:44.200 | 1.00th=[ 2024], 
5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2606], 00:31:44.200 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:31:44.200 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3425], 95.00th=[ 3589], 00:31:44.200 | 99.00th=[ 3949], 99.50th=[ 4015], 99.90th=[ 4424], 99.95th=[ 6587], 00:31:44.200 | 99.99th=[ 6783] 00:31:44.200 bw ( KiB/s): min=21392, max=22016, per=25.22%, avg=21648.00, stdev=247.10, samples=9 00:31:44.200 iops : min= 2674, max= 2752, avg=2706.00, stdev=30.89, samples=9 00:31:44.200 lat (msec) : 2=0.75%, 4=98.55%, 10=0.69% 00:31:44.200 cpu : usr=93.26%, sys=6.38%, ctx=8, majf=0, minf=9 00:31:44.200 IO depths : 1=0.1%, 2=0.9%, 4=66.1%, 8=32.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:44.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.200 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.200 issued rwts: total=13541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.200 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:44.200 filename1: (groupid=0, jobs=1): err= 0: pid=2261452: Fri Apr 26 09:06:00 2024 00:31:44.200 read: IOPS=2651, BW=20.7MiB/s (21.7MB/s)(104MiB/5002msec) 00:31:44.200 slat (nsec): min=5728, max=48747, avg=8423.98, stdev=3344.13 00:31:44.200 clat (usec): min=1776, max=6980, avg=2996.42, stdev=406.93 00:31:44.200 lat (usec): min=1782, max=6998, avg=3004.85, stdev=406.87 00:31:44.200 clat percentiles (usec): 00:31:44.200 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2704], 00:31:44.200 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:31:44.200 | 70.00th=[ 3163], 80.00th=[ 3294], 90.00th=[ 3490], 95.00th=[ 3720], 00:31:44.200 | 99.00th=[ 4015], 99.50th=[ 4178], 99.90th=[ 4686], 99.95th=[ 6587], 00:31:44.200 | 99.99th=[ 6915] 00:31:44.200 bw ( KiB/s): min=20880, max=21392, per=24.71%, avg=21211.20, stdev=147.81, samples=10 00:31:44.200 iops : min= 2610, max= 2674, avg=2651.40, stdev=18.48, samples=10 00:31:44.200 lat (msec) : 2=0.44%, 4=98.42%, 10=1.13% 00:31:44.200 cpu : usr=93.74%, sys=5.90%, ctx=7, majf=0, minf=0 00:31:44.200 IO depths : 1=0.1%, 2=0.8%, 4=66.1%, 8=33.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:44.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.200 complete : 0=0.0%, 4=96.4%, 8=3.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.200 issued rwts: total=13262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.201 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:44.201 00:31:44.201 Run status group 0 (all jobs): 00:31:44.201 READ: bw=83.8MiB/s (87.9MB/s), 20.7MiB/s-21.3MiB/s (21.7MB/s-22.3MB/s), io=419MiB (440MB), run=5001-5002msec 00:31:44.201 09:06:01 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:44.201 09:06:01 -- target/dif.sh@43 -- # local sub 00:31:44.201 09:06:01 -- target/dif.sh@45 -- # for sub in "$@" 00:31:44.201 09:06:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:44.201 09:06:01 -- target/dif.sh@36 -- # local sub_id=0 00:31:44.201 09:06:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:44.201 09:06:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.201 09:06:01 -- common/autotest_common.sh@10 -- # set +x 00:31:44.201 09:06:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.201 09:06:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:44.201 09:06:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.201 09:06:01 -- common/autotest_common.sh@10 -- # set +x 
00:31:44.201 09:06:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.201 09:06:01 -- target/dif.sh@45 -- # for sub in "$@" 00:31:44.201 09:06:01 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:44.201 09:06:01 -- target/dif.sh@36 -- # local sub_id=1 00:31:44.201 09:06:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:44.201 09:06:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.201 09:06:01 -- common/autotest_common.sh@10 -- # set +x 00:31:44.201 09:06:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.201 09:06:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:44.201 09:06:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.201 09:06:01 -- common/autotest_common.sh@10 -- # set +x 00:31:44.201 09:06:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.201 00:31:44.201 real 0m24.275s 00:31:44.201 user 4m53.876s 00:31:44.201 sys 0m9.302s 00:31:44.201 09:06:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:44.201 09:06:01 -- common/autotest_common.sh@10 -- # set +x 00:31:44.201 ************************************ 00:31:44.201 END TEST fio_dif_rand_params 00:31:44.201 ************************************ 00:31:44.201 09:06:01 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:44.201 09:06:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:44.201 09:06:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:44.201 09:06:01 -- common/autotest_common.sh@10 -- # set +x 00:31:44.201 ************************************ 00:31:44.201 START TEST fio_dif_digest 00:31:44.201 ************************************ 00:31:44.201 09:06:01 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:31:44.201 09:06:01 -- target/dif.sh@123 -- # local NULL_DIF 00:31:44.201 09:06:01 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:44.201 09:06:01 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:44.201 09:06:01 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:44.201 09:06:01 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:44.201 09:06:01 -- target/dif.sh@127 -- # numjobs=3 00:31:44.201 09:06:01 -- target/dif.sh@127 -- # iodepth=3 00:31:44.201 09:06:01 -- target/dif.sh@127 -- # runtime=10 00:31:44.201 09:06:01 -- target/dif.sh@128 -- # hdgst=true 00:31:44.201 09:06:01 -- target/dif.sh@128 -- # ddgst=true 00:31:44.201 09:06:01 -- target/dif.sh@130 -- # create_subsystems 0 00:31:44.201 09:06:01 -- target/dif.sh@28 -- # local sub 00:31:44.201 09:06:01 -- target/dif.sh@30 -- # for sub in "$@" 00:31:44.201 09:06:01 -- target/dif.sh@31 -- # create_subsystem 0 00:31:44.201 09:06:01 -- target/dif.sh@18 -- # local sub_id=0 00:31:44.201 09:06:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:44.201 09:06:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.201 09:06:01 -- common/autotest_common.sh@10 -- # set +x 00:31:44.201 bdev_null0 00:31:44.201 09:06:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.201 09:06:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:44.201 09:06:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.201 09:06:01 -- common/autotest_common.sh@10 -- # set +x 00:31:44.201 09:06:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.201 09:06:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:31:44.201 09:06:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.201 09:06:01 -- common/autotest_common.sh@10 -- # set +x 00:31:44.201 09:06:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.201 09:06:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:44.201 09:06:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:44.201 09:06:01 -- common/autotest_common.sh@10 -- # set +x 00:31:44.201 [2024-04-26 09:06:01.411816] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:44.201 09:06:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:44.201 09:06:01 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:44.201 09:06:01 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:44.201 09:06:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:44.201 09:06:01 -- nvmf/common.sh@521 -- # config=() 00:31:44.201 09:06:01 -- nvmf/common.sh@521 -- # local subsystem config 00:31:44.201 09:06:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:44.201 09:06:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:44.201 09:06:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:44.201 { 00:31:44.201 "params": { 00:31:44.201 "name": "Nvme$subsystem", 00:31:44.201 "trtype": "$TEST_TRANSPORT", 00:31:44.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.201 "adrfam": "ipv4", 00:31:44.201 "trsvcid": "$NVMF_PORT", 00:31:44.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.201 "hdgst": ${hdgst:-false}, 00:31:44.201 "ddgst": ${ddgst:-false} 00:31:44.201 }, 00:31:44.201 "method": "bdev_nvme_attach_controller" 00:31:44.201 } 00:31:44.201 EOF 00:31:44.201 )") 00:31:44.201 09:06:01 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:44.201 09:06:01 -- target/dif.sh@82 -- # gen_fio_conf 00:31:44.201 09:06:01 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:44.201 09:06:01 -- target/dif.sh@54 -- # local file 00:31:44.201 09:06:01 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:44.201 09:06:01 -- target/dif.sh@56 -- # cat 00:31:44.201 09:06:01 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:44.201 09:06:01 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:44.201 09:06:01 -- common/autotest_common.sh@1327 -- # shift 00:31:44.201 09:06:01 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:44.201 09:06:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.201 09:06:01 -- nvmf/common.sh@543 -- # cat 00:31:44.201 09:06:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:44.201 09:06:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:44.201 09:06:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:44.201 09:06:01 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:44.201 09:06:01 -- target/dif.sh@72 -- # (( file <= files )) 00:31:44.201 09:06:01 -- nvmf/common.sh@545 -- # jq . 
00:31:44.201 09:06:01 -- nvmf/common.sh@546 -- # IFS=, 00:31:44.201 09:06:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:44.201 "params": { 00:31:44.201 "name": "Nvme0", 00:31:44.201 "trtype": "tcp", 00:31:44.201 "traddr": "10.0.0.2", 00:31:44.201 "adrfam": "ipv4", 00:31:44.201 "trsvcid": "4420", 00:31:44.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:44.201 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:44.201 "hdgst": true, 00:31:44.201 "ddgst": true 00:31:44.201 }, 00:31:44.201 "method": "bdev_nvme_attach_controller" 00:31:44.201 }' 00:31:44.494 09:06:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:44.494 09:06:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:44.494 09:06:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.494 09:06:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:44.494 09:06:01 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:44.494 09:06:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:44.494 09:06:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:44.494 09:06:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:44.494 09:06:01 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:44.494 09:06:01 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:44.758 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:44.758 ... 00:31:44.758 fio-3.35 00:31:44.758 Starting 3 threads 00:31:44.758 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.949 00:31:56.949 filename0: (groupid=0, jobs=1): err= 0: pid=2262805: Fri Apr 26 09:06:12 2024 00:31:56.949 read: IOPS=182, BW=22.9MiB/s (24.0MB/s)(230MiB/10047msec) 00:31:56.949 slat (nsec): min=3046, max=30924, avg=10714.15, stdev=1862.60 00:31:56.949 clat (usec): min=5409, max=98381, avg=16362.65, stdev=14036.85 00:31:56.949 lat (usec): min=5415, max=98391, avg=16373.37, stdev=14036.85 00:31:56.949 clat percentiles (usec): 00:31:56.949 | 1.00th=[ 6783], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[10290], 00:31:56.949 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11994], 00:31:56.949 | 70.00th=[12649], 80.00th=[14222], 90.00th=[51119], 95.00th=[54789], 00:31:56.949 | 99.00th=[57410], 99.50th=[58983], 99.90th=[96994], 99.95th=[98042], 00:31:56.949 | 99.99th=[98042] 00:31:56.949 bw ( KiB/s): min=17920, max=31744, per=24.02%, avg=23488.00, stdev=4136.12, samples=20 00:31:56.949 iops : min= 140, max= 248, avg=183.50, stdev=32.31, samples=20 00:31:56.949 lat (msec) : 10=14.42%, 20=74.48%, 50=0.54%, 100=10.55% 00:31:56.949 cpu : usr=91.72%, sys=7.90%, ctx=14, majf=0, minf=62 00:31:56.949 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:56.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.949 issued rwts: total=1838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.949 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:56.949 filename0: (groupid=0, jobs=1): err= 0: pid=2262806: Fri Apr 26 09:06:12 2024 00:31:56.949 read: IOPS=294, BW=36.8MiB/s (38.6MB/s)(370MiB/10046msec) 00:31:56.949 slat (nsec): min=4021, max=27776, avg=10260.77, stdev=1889.89 00:31:56.949 clat 
(usec): min=5124, max=63203, avg=10155.04, stdev=5515.49 00:31:56.949 lat (usec): min=5131, max=63214, avg=10165.30, stdev=5515.65 00:31:56.949 clat percentiles (usec): 00:31:56.949 | 1.00th=[ 5604], 5.00th=[ 6915], 10.00th=[ 7439], 20.00th=[ 7963], 00:31:56.949 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10028], 00:31:56.949 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11600], 95.00th=[12518], 00:31:56.949 | 99.00th=[52691], 99.50th=[54789], 99.90th=[63177], 99.95th=[63177], 00:31:56.949 | 99.99th=[63177] 00:31:56.949 bw ( KiB/s): min=26624, max=44544, per=38.71%, avg=37852.35, stdev=5199.37, samples=20 00:31:56.949 iops : min= 208, max= 348, avg=295.70, stdev=40.66, samples=20 00:31:56.949 lat (msec) : 10=57.53%, 20=41.08%, 50=0.10%, 100=1.28% 00:31:56.949 cpu : usr=90.73%, sys=8.86%, ctx=15, majf=0, minf=142 00:31:56.949 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:56.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.949 issued rwts: total=2960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.949 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:56.949 filename0: (groupid=0, jobs=1): err= 0: pid=2262807: Fri Apr 26 09:06:12 2024 00:31:56.949 read: IOPS=286, BW=35.8MiB/s (37.6MB/s)(360MiB/10045msec) 00:31:56.949 slat (nsec): min=6006, max=23585, avg=10207.45, stdev=1858.99 00:31:56.949 clat (usec): min=5188, max=65490, avg=10443.22, stdev=5676.18 00:31:56.949 lat (usec): min=5195, max=65514, avg=10453.43, stdev=5676.40 00:31:56.949 clat percentiles (usec): 00:31:56.949 | 1.00th=[ 5997], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 8094], 00:31:56.949 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10421], 00:31:56.949 | 70.00th=[10814], 80.00th=[11338], 90.00th=[11994], 95.00th=[12911], 00:31:56.949 | 99.00th=[53740], 99.50th=[55837], 99.90th=[64226], 99.95th=[64226], 00:31:56.949 | 99.99th=[65274] 00:31:56.949 bw ( KiB/s): min=29184, max=44032, per=37.64%, avg=36812.80, stdev=4268.86, samples=20 00:31:56.949 iops : min= 228, max= 344, avg=287.60, stdev=33.35, samples=20 00:31:56.949 lat (msec) : 10=51.11%, 20=47.46%, 50=0.03%, 100=1.39% 00:31:56.949 cpu : usr=91.01%, sys=8.58%, ctx=16, majf=0, minf=116 00:31:56.949 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:56.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.949 issued rwts: total=2878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.949 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:56.949 00:31:56.949 Run status group 0 (all jobs): 00:31:56.949 READ: bw=95.5MiB/s (100MB/s), 22.9MiB/s-36.8MiB/s (24.0MB/s-38.6MB/s), io=960MiB (1006MB), run=10045-10047msec 00:31:56.949 09:06:12 -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:56.949 09:06:12 -- target/dif.sh@43 -- # local sub 00:31:56.949 09:06:12 -- target/dif.sh@45 -- # for sub in "$@" 00:31:56.949 09:06:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:56.949 09:06:12 -- target/dif.sh@36 -- # local sub_id=0 00:31:56.949 09:06:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:56.949 09:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.949 09:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:56.949 09:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:31:56.949 09:06:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:56.949 09:06:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:56.949 09:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:56.949 09:06:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:56.949 00:31:56.949 real 0m11.248s 00:31:56.949 user 0m36.450s 00:31:56.949 sys 0m2.882s 00:31:56.949 09:06:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:56.949 09:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:56.949 ************************************ 00:31:56.949 END TEST fio_dif_digest 00:31:56.949 ************************************ 00:31:56.949 09:06:12 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:56.949 09:06:12 -- target/dif.sh@147 -- # nvmftestfini 00:31:56.949 09:06:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:56.949 09:06:12 -- nvmf/common.sh@117 -- # sync 00:31:56.949 09:06:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:56.949 09:06:12 -- nvmf/common.sh@120 -- # set +e 00:31:56.949 09:06:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:56.949 09:06:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:56.949 rmmod nvme_tcp 00:31:56.949 rmmod nvme_fabrics 00:31:56.949 rmmod nvme_keyring 00:31:56.949 09:06:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:56.949 09:06:12 -- nvmf/common.sh@124 -- # set -e 00:31:56.949 09:06:12 -- nvmf/common.sh@125 -- # return 0 00:31:56.949 09:06:12 -- nvmf/common.sh@478 -- # '[' -n 2253548 ']' 00:31:56.949 09:06:12 -- nvmf/common.sh@479 -- # killprocess 2253548 00:31:56.949 09:06:12 -- common/autotest_common.sh@936 -- # '[' -z 2253548 ']' 00:31:56.949 09:06:12 -- common/autotest_common.sh@940 -- # kill -0 2253548 00:31:56.949 09:06:12 -- common/autotest_common.sh@941 -- # uname 00:31:56.949 09:06:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:56.949 09:06:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2253548 00:31:56.949 09:06:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:56.949 09:06:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:56.949 09:06:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2253548' 00:31:56.949 killing process with pid 2253548 00:31:56.949 09:06:12 -- common/autotest_common.sh@955 -- # kill 2253548 00:31:56.949 09:06:12 -- common/autotest_common.sh@960 -- # wait 2253548 00:31:56.949 09:06:12 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:31:56.949 09:06:12 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:58.848 Waiting for block devices as requested 00:31:58.848 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:58.848 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:58.848 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:58.848 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:58.848 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:59.106 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:59.106 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:59.106 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:59.422 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:59.422 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:59.422 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:59.422 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:59.682 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:59.682 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:59.682 
0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:59.940 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:59.940 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:32:00.198 09:06:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:00.198 09:06:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:00.198 09:06:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:00.198 09:06:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:00.198 09:06:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.198 09:06:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:00.198 09:06:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.099 09:06:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:02.099 00:32:02.099 real 1m16.498s 00:32:02.099 user 7m15.448s 00:32:02.099 sys 0m30.202s 00:32:02.099 09:06:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:02.099 09:06:19 -- common/autotest_common.sh@10 -- # set +x 00:32:02.099 ************************************ 00:32:02.099 END TEST nvmf_dif 00:32:02.099 ************************************ 00:32:02.099 09:06:19 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:02.099 09:06:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:02.099 09:06:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:02.099 09:06:19 -- common/autotest_common.sh@10 -- # set +x 00:32:02.358 ************************************ 00:32:02.358 START TEST nvmf_abort_qd_sizes 00:32:02.358 ************************************ 00:32:02.358 09:06:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:02.358 * Looking for test storage... 
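The nvmf_dif teardown traced above follows a retry-until-unloaded pattern: nvmftestfini syncs, tries to remove the NVMe/TCP kernel modules up to 20 times (one modprobe -r of nvme-tcp also drops nvme_fabrics and nvme_keyring once they are unused), and only then kills the target process. A minimal sketch of that cleanup, simplified from the trace and assuming $nvmfpid holds a target PID started by this shell:

    # retry the unload; nvme-tcp can hold references for a moment after teardown
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break    # also removes nvme_fabrics/nvme_keyring
    done
    modprobe -v -r nvme-fabrics

    # kill the target only if it is still alive, then reap it
    if kill -0 "$nvmfpid" 2>/dev/null; then
        kill "$nvmfpid"
        wait "$nvmfpid"
    fi

The full killprocess helper in the trace additionally inspects ps --no-headers -o comm= to special-case sudo-wrapped targets; the sketch omits that branch.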
00:32:02.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:02.358 09:06:19 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:02.358 09:06:19 -- nvmf/common.sh@7 -- # uname -s 00:32:02.358 09:06:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:02.358 09:06:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:02.358 09:06:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:02.358 09:06:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:02.358 09:06:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:02.358 09:06:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:02.358 09:06:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:02.358 09:06:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:02.358 09:06:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:02.358 09:06:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:02.616 09:06:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:02.616 09:06:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:02.616 09:06:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:02.616 09:06:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:02.616 09:06:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:02.616 09:06:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:02.616 09:06:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:02.616 09:06:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.616 09:06:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.616 09:06:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.616 09:06:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.616 09:06:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.617 09:06:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.617 09:06:19 -- paths/export.sh@5 -- # export PATH 00:32:02.617 09:06:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.617 09:06:19 -- nvmf/common.sh@47 -- # : 0 00:32:02.617 09:06:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:02.617 09:06:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:02.617 09:06:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:02.617 09:06:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.617 09:06:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.617 09:06:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:02.617 09:06:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:02.617 09:06:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:02.617 09:06:19 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:02.617 09:06:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:32:02.617 09:06:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:02.617 09:06:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:32:02.617 09:06:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:32:02.617 09:06:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:32:02.617 09:06:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.617 09:06:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:02.617 09:06:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.617 09:06:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:32:02.617 09:06:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:32:02.617 09:06:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:32:02.617 09:06:19 -- common/autotest_common.sh@10 -- # set +x 00:32:09.177 09:06:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:09.177 09:06:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:32:09.177 09:06:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:09.177 09:06:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:09.177 09:06:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:09.177 09:06:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:09.177 09:06:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:09.177 09:06:26 -- nvmf/common.sh@295 -- # net_devs=() 00:32:09.177 09:06:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:09.177 09:06:26 -- nvmf/common.sh@296 -- # e810=() 00:32:09.177 09:06:26 -- nvmf/common.sh@296 -- # local -ga e810 00:32:09.177 09:06:26 -- nvmf/common.sh@297 -- # x722=() 00:32:09.177 09:06:26 -- nvmf/common.sh@297 -- # local -ga x722 00:32:09.177 09:06:26 -- nvmf/common.sh@298 -- # mlx=() 00:32:09.177 09:06:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:32:09.177 09:06:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.177 09:06:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.177 09:06:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.177 09:06:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.177 09:06:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.177 09:06:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.177 09:06:26 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.177 09:06:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.177 09:06:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.177 09:06:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.177 09:06:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.177 09:06:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:09.177 09:06:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:09.177 09:06:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:09.177 09:06:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:09.177 09:06:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:09.177 09:06:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:09.177 09:06:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:09.177 09:06:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:09.177 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:09.177 09:06:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:09.177 09:06:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:09.177 09:06:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.177 09:06:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.177 09:06:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:09.177 09:06:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:09.177 09:06:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:09.177 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:09.177 09:06:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:09.177 09:06:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:09.177 09:06:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.177 09:06:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.177 09:06:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:09.177 09:06:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:09.177 09:06:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:09.177 09:06:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:09.177 09:06:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:09.177 09:06:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.177 09:06:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:09.177 09:06:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.177 09:06:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:09.177 Found net devices under 0000:af:00.0: cvl_0_0 00:32:09.177 09:06:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.178 09:06:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:09.178 09:06:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.178 09:06:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:32:09.178 09:06:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.178 09:06:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:09.178 Found net devices under 0000:af:00.1: cvl_0_1 00:32:09.178 09:06:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.178 09:06:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:32:09.178 09:06:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:32:09.178 09:06:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:32:09.178 09:06:26 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:32:09.178 09:06:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:32:09.178 09:06:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.178 09:06:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.178 09:06:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:09.178 09:06:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:09.178 09:06:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:09.178 09:06:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:09.178 09:06:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:09.178 09:06:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:09.178 09:06:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.178 09:06:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:09.178 09:06:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:09.178 09:06:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:09.178 09:06:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:09.178 09:06:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:09.178 09:06:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:09.178 09:06:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:09.178 09:06:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:09.178 09:06:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:09.178 09:06:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:09.178 09:06:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:09.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:09.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:32:09.178 00:32:09.178 --- 10.0.0.2 ping statistics --- 00:32:09.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.178 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:32:09.178 09:06:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:09.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:09.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:32:09.178 00:32:09.178 --- 10.0.0.1 ping statistics --- 00:32:09.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.178 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:32:09.178 09:06:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.178 09:06:26 -- nvmf/common.sh@411 -- # return 0 00:32:09.178 09:06:26 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:32:09.178 09:06:26 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:12.462 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:12.462 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:12.462 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:12.462 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:12.462 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:12.462 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:12.462 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:12.462 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:12.462 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:12.462 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:12.462 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:12.462 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:12.462 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:12.462 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:12.462 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:12.462 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:14.365 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:32:14.365 09:06:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:14.365 09:06:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:32:14.365 09:06:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:32:14.365 09:06:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:14.365 09:06:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:32:14.365 09:06:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:32:14.365 09:06:31 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:14.365 09:06:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:14.365 09:06:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:14.365 09:06:31 -- common/autotest_common.sh@10 -- # set +x 00:32:14.365 09:06:31 -- nvmf/common.sh@470 -- # nvmfpid=2271536 00:32:14.365 09:06:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:14.365 09:06:31 -- nvmf/common.sh@471 -- # waitforlisten 2271536 00:32:14.365 09:06:31 -- common/autotest_common.sh@817 -- # '[' -z 2271536 ']' 00:32:14.365 09:06:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.365 09:06:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:14.365 09:06:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:14.365 09:06:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:14.365 09:06:31 -- common/autotest_common.sh@10 -- # set +x 00:32:14.365 [2024-04-26 09:06:31.350620] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
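The nvmf_tcp_init trace above shows how the physical E810 port pair is split across a network namespace so target and initiator can exchange NVMe/TCP on a single host: cvl_0_0 moves into cvl_0_0_ns_spdk as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. The same wiring, condensed from the commands in the log (interface and namespace names taken verbatim from this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                            # sanity check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

NVMF_APP is then prefixed with the ip netns exec wrapper, which is why nvmf_tgt below launches inside cvl_0_0_ns_spdk while the initiator-side tools run unwrapped.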
00:32:14.365 [2024-04-26 09:06:31.350665] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:14.365 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.365 [2024-04-26 09:06:31.424834] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:14.365 [2024-04-26 09:06:31.498221] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:14.365 [2024-04-26 09:06:31.498259] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:14.365 [2024-04-26 09:06:31.498269] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:14.365 [2024-04-26 09:06:31.498278] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:14.365 [2024-04-26 09:06:31.498285] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:14.365 [2024-04-26 09:06:31.498336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:14.365 [2024-04-26 09:06:31.498430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:14.365 [2024-04-26 09:06:31.498517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:14.365 [2024-04-26 09:06:31.498520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:14.930 09:06:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:14.930 09:06:32 -- common/autotest_common.sh@850 -- # return 0 00:32:14.930 09:06:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:14.930 09:06:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:14.930 09:06:32 -- common/autotest_common.sh@10 -- # set +x 00:32:15.189 09:06:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:15.189 09:06:32 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:15.189 09:06:32 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:15.189 09:06:32 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:15.189 09:06:32 -- scripts/common.sh@309 -- # local bdf bdfs 00:32:15.189 09:06:32 -- scripts/common.sh@310 -- # local nvmes 00:32:15.189 09:06:32 -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:32:15.189 09:06:32 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:15.189 09:06:32 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:15.189 09:06:32 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:32:15.189 09:06:32 -- scripts/common.sh@320 -- # uname -s 00:32:15.189 09:06:32 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:15.189 09:06:32 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:15.189 09:06:32 -- scripts/common.sh@325 -- # (( 1 )) 00:32:15.189 09:06:32 -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:32:15.189 09:06:32 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:15.189 09:06:32 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:32:15.189 09:06:32 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:15.189 09:06:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:15.189 09:06:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:15.189 09:06:32 -- 
common/autotest_common.sh@10 -- # set +x 00:32:15.189 ************************************ 00:32:15.189 START TEST spdk_target_abort 00:32:15.189 ************************************ 00:32:15.189 09:06:32 -- common/autotest_common.sh@1111 -- # spdk_target 00:32:15.189 09:06:32 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:15.189 09:06:32 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:32:15.189 09:06:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:15.189 09:06:32 -- common/autotest_common.sh@10 -- # set +x 00:32:18.486 spdk_targetn1 00:32:18.486 09:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:18.486 09:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:18.486 09:06:35 -- common/autotest_common.sh@10 -- # set +x 00:32:18.486 [2024-04-26 09:06:35.224947] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:18.486 09:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:18.486 09:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:18.486 09:06:35 -- common/autotest_common.sh@10 -- # set +x 00:32:18.486 09:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:18.486 09:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:18.486 09:06:35 -- common/autotest_common.sh@10 -- # set +x 00:32:18.486 09:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:18.486 09:06:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:18.486 09:06:35 -- common/autotest_common.sh@10 -- # set +x 00:32:18.486 [2024-04-26 09:06:35.265244] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:18.486 09:06:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
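spdk_target_abort stands the target up entirely over JSON-RPC before driving it: the local NVMe device is attached as a bdev, a TCP transport is created, and the resulting spdk_targetn1 namespace is exposed through a subsystem listener. The rpc_cmd calls traced above correspond to this rpc.py sequence (workspace-absolute paths shortened; PCI address, NQN, and serial as in the log):

    scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

The rabort helper then sweeps queue depths 4, 24, and 64 with the abort example, e.g. for the first pass:

    build/examples/abort -q 4 -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

Each run reports issued I/O plus submitted/unsubmitted aborts, as in the NS/CTRLR summary lines that follow.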
00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:18.486 09:06:35 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:18.487 09:06:35 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:18.487 09:06:35 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:18.487 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.766 Initializing NVMe Controllers 00:32:21.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:21.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:21.766 Initialization complete. Launching workers. 00:32:21.766 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5124, failed: 0 00:32:21.766 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1655, failed to submit 3469 00:32:21.766 success 910, unsuccess 745, failed 0 00:32:21.766 09:06:38 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:21.766 09:06:38 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:21.766 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.039 Initializing NVMe Controllers 00:32:25.039 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:25.040 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:25.040 Initialization complete. Launching workers. 00:32:25.040 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8748, failed: 0 00:32:25.040 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1235, failed to submit 7513 00:32:25.040 success 311, unsuccess 924, failed 0 00:32:25.040 09:06:41 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:25.040 09:06:41 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:25.040 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.317 Initializing NVMe Controllers 00:32:28.317 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:28.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:28.317 Initialization complete. Launching workers. 
00:32:28.317 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34205, failed: 0 00:32:28.317 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2783, failed to submit 31422 00:32:28.317 success 687, unsuccess 2096, failed 0 00:32:28.317 09:06:44 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:28.317 09:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:28.317 09:06:44 -- common/autotest_common.sh@10 -- # set +x 00:32:28.317 09:06:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:28.317 09:06:44 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:28.317 09:06:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:28.317 09:06:44 -- common/autotest_common.sh@10 -- # set +x 00:32:29.688 09:06:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:29.688 09:06:46 -- target/abort_qd_sizes.sh@61 -- # killprocess 2271536 00:32:29.688 09:06:46 -- common/autotest_common.sh@936 -- # '[' -z 2271536 ']' 00:32:29.688 09:06:46 -- common/autotest_common.sh@940 -- # kill -0 2271536 00:32:29.688 09:06:46 -- common/autotest_common.sh@941 -- # uname 00:32:29.688 09:06:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:29.688 09:06:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2271536 00:32:29.946 09:06:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:29.946 09:06:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:29.946 09:06:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2271536' 00:32:29.946 killing process with pid 2271536 00:32:29.946 09:06:46 -- common/autotest_common.sh@955 -- # kill 2271536 00:32:29.946 09:06:46 -- common/autotest_common.sh@960 -- # wait 2271536 00:32:29.946 00:32:29.946 real 0m14.770s 00:32:29.946 user 0m58.767s 00:32:29.946 sys 0m2.822s 00:32:29.946 09:06:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:29.946 09:06:47 -- common/autotest_common.sh@10 -- # set +x 00:32:29.946 ************************************ 00:32:29.946 END TEST spdk_target_abort 00:32:29.946 ************************************ 00:32:29.946 09:06:47 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:29.946 09:06:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:29.946 09:06:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:29.946 09:06:47 -- common/autotest_common.sh@10 -- # set +x 00:32:30.204 ************************************ 00:32:30.204 START TEST kernel_target_abort 00:32:30.204 ************************************ 00:32:30.204 09:06:47 -- common/autotest_common.sh@1111 -- # kernel_target 00:32:30.204 09:06:47 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:30.204 09:06:47 -- nvmf/common.sh@717 -- # local ip 00:32:30.204 09:06:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:30.204 09:06:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:30.204 09:06:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.204 09:06:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.204 09:06:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:30.204 09:06:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.204 09:06:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:30.204 09:06:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:30.204 09:06:47 -- nvmf/common.sh@731 -- # 
echo 10.0.0.1 00:32:30.205 09:06:47 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:30.205 09:06:47 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:30.205 09:06:47 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:32:30.205 09:06:47 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:30.205 09:06:47 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:30.205 09:06:47 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:30.205 09:06:47 -- nvmf/common.sh@628 -- # local block nvme 00:32:30.205 09:06:47 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:32:30.205 09:06:47 -- nvmf/common.sh@631 -- # modprobe nvmet 00:32:30.205 09:06:47 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:30.205 09:06:47 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:33.520 Waiting for block devices as requested 00:32:33.520 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:33.520 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:33.520 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:33.520 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:33.778 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:33.778 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:33.778 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:33.778 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:34.037 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:34.037 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:34.037 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:34.296 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:34.296 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:34.296 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:34.555 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:34.555 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:34.555 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:32:34.814 09:06:51 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:32:34.814 09:06:51 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:34.814 09:06:51 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:32:34.814 09:06:51 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:32:34.814 09:06:51 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:34.814 09:06:51 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:32:34.814 09:06:51 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:32:34.814 09:06:51 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:34.814 09:06:51 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:34.814 No valid GPT data, bailing 00:32:34.814 09:06:51 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:34.814 09:06:51 -- scripts/common.sh@391 -- # pt= 00:32:34.814 09:06:51 -- scripts/common.sh@392 -- # return 1 00:32:34.814 09:06:51 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:32:34.814 09:06:51 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:32:34.814 09:06:51 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:34.814 09:06:51 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:34.814 09:06:52 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:34.814 09:06:52 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:34.814 09:06:52 -- nvmf/common.sh@656 -- # echo 1 00:32:34.814 09:06:52 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:32:34.814 09:06:52 -- nvmf/common.sh@658 -- # echo 1 00:32:34.814 09:06:52 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:32:34.814 09:06:52 -- nvmf/common.sh@661 -- # echo tcp 00:32:34.814 09:06:52 -- nvmf/common.sh@662 -- # echo 4420 00:32:34.814 09:06:52 -- nvmf/common.sh@663 -- # echo ipv4 00:32:34.814 09:06:52 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:34.814 09:06:52 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:32:35.074 00:32:35.074 Discovery Log Number of Records 2, Generation counter 2 00:32:35.074 =====Discovery Log Entry 0====== 00:32:35.074 trtype: tcp 00:32:35.074 adrfam: ipv4 00:32:35.074 subtype: current discovery subsystem 00:32:35.074 treq: not specified, sq flow control disable supported 00:32:35.074 portid: 1 00:32:35.074 trsvcid: 4420 00:32:35.074 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:35.074 traddr: 10.0.0.1 00:32:35.074 eflags: none 00:32:35.074 sectype: none 00:32:35.074 =====Discovery Log Entry 1====== 00:32:35.074 trtype: tcp 00:32:35.074 adrfam: ipv4 00:32:35.074 subtype: nvme subsystem 00:32:35.074 treq: not specified, sq flow control disable supported 00:32:35.074 portid: 1 00:32:35.074 trsvcid: 4420 00:32:35.074 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:35.074 traddr: 10.0.0.1 00:32:35.074 eflags: none 00:32:35.074 sectype: none 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:35.074 09:06:52 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:35.074 EAL: No free 2048 kB hugepages reported on node 1 00:32:38.359 Initializing NVMe Controllers 00:32:38.359 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:38.359 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:38.359 Initialization complete. Launching workers. 00:32:38.359 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 51237, failed: 0 00:32:38.359 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 51237, failed to submit 0 00:32:38.359 success 0, unsuccess 51237, failed 0 00:32:38.359 09:06:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:38.359 09:06:55 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:38.359 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.646 Initializing NVMe Controllers 00:32:41.646 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:41.646 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:41.646 Initialization complete. Launching workers. 00:32:41.646 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 97943, failed: 0 00:32:41.646 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24630, failed to submit 73313 00:32:41.646 success 0, unsuccess 24630, failed 0 00:32:41.646 09:06:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:41.646 09:06:58 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:41.646 EAL: No free 2048 kB hugepages reported on node 1 00:32:44.176 Initializing NVMe Controllers 00:32:44.176 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:44.176 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:44.176 Initialization complete. Launching workers. 
00:32:44.176 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93776, failed: 0 00:32:44.176 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23458, failed to submit 70318 00:32:44.176 success 0, unsuccess 23458, failed 0 00:32:44.176 09:07:01 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:44.176 09:07:01 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:44.176 09:07:01 -- nvmf/common.sh@675 -- # echo 0 00:32:44.176 09:07:01 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:44.177 09:07:01 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:44.177 09:07:01 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:44.177 09:07:01 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:44.177 09:07:01 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:32:44.177 09:07:01 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:32:44.435 09:07:01 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:47.756 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:47.756 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:47.756 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:47.756 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:47.756 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:47.756 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:47.756 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:47.756 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:47.756 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:47.756 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:47.756 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:47.756 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:47.756 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:47.756 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:47.756 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:47.756 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:49.132 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:32:49.390 00:32:49.390 real 0m19.044s 00:32:49.390 user 0m6.393s 00:32:49.390 sys 0m6.305s 00:32:49.390 09:07:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:49.390 09:07:06 -- common/autotest_common.sh@10 -- # set +x 00:32:49.390 ************************************ 00:32:49.390 END TEST kernel_target_abort 00:32:49.390 ************************************ 00:32:49.390 09:07:06 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:49.390 09:07:06 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:49.390 09:07:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:49.390 09:07:06 -- nvmf/common.sh@117 -- # sync 00:32:49.390 09:07:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:49.390 09:07:06 -- nvmf/common.sh@120 -- # set +e 00:32:49.390 09:07:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:49.390 09:07:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:49.390 rmmod nvme_tcp 00:32:49.390 rmmod nvme_fabrics 00:32:49.390 rmmod nvme_keyring 00:32:49.390 09:07:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:49.390 09:07:06 -- nvmf/common.sh@124 -- # set -e 00:32:49.390 09:07:06 -- nvmf/common.sh@125 -- # return 0 00:32:49.390 09:07:06 -- nvmf/common.sh@478 -- # '[' -n 2271536 ']' 
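The kernel_target_abort half needs no SPDK target process at all: configure_kernel_target builds the equivalent subsystem out of Linux nvmet configfs entries, and clean_kernel_target (traced just above) removes them in reverse order. A condensed sketch of that configfs sequence; the echo commands in the xtrace hide their redirect targets, so the attribute file names below are the standard nvmet ones rather than values read from this log, and the trace's model/serial echo is omitted:

    modprobe nvmet
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$sub" "$sub/namespaces/1" /sys/kernel/config/nvmet/ports/1
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"     # backing device from the GPT scan above
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
    echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
    echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
    echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
    ln -s "$sub" /sys/kernel/config/nvmet/ports/1/subsystems/   # go live

    # teardown, reverse order (clean_kernel_target)
    echo 0 > "$sub/namespaces/1/enable"
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir "$sub/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$sub"
    modprobe -r nvmet_tcp nvmet

Because the kernel target is the abort recipient here, the summaries above show 0 successful aborts: the kernel nvmet implementation completes the I/O rather than honoring the abort.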
00:32:49.390 09:07:06 -- nvmf/common.sh@479 -- # killprocess 2271536 00:32:49.390 09:07:06 -- common/autotest_common.sh@936 -- # '[' -z 2271536 ']' 00:32:49.390 09:07:06 -- common/autotest_common.sh@940 -- # kill -0 2271536 00:32:49.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2271536) - No such process 00:32:49.390 09:07:06 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2271536 is not found' 00:32:49.390 Process with pid 2271536 is not found 00:32:49.390 09:07:06 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:32:49.390 09:07:06 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:52.673 Waiting for block devices as requested 00:32:52.673 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:52.673 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:52.931 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:52.931 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:52.931 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:52.931 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:53.190 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:53.190 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:53.190 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:53.448 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:53.448 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:53.448 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:53.706 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:53.706 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:53.706 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:53.964 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:53.964 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:32:54.222 09:07:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:54.222 09:07:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:54.222 09:07:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:54.222 09:07:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:54.222 09:07:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.222 09:07:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:54.222 09:07:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:56.120 09:07:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:56.120 00:32:56.120 real 0m53.836s 00:32:56.120 user 1m9.928s 00:32:56.120 sys 0m19.648s 00:32:56.120 09:07:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:56.120 09:07:13 -- common/autotest_common.sh@10 -- # set +x 00:32:56.120 ************************************ 00:32:56.120 END TEST nvmf_abort_qd_sizes 00:32:56.120 ************************************ 00:32:56.377 09:07:13 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:56.377 09:07:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:56.377 09:07:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:56.377 09:07:13 -- common/autotest_common.sh@10 -- # set +x 00:32:56.377 ************************************ 00:32:56.377 START TEST keyring_file 00:32:56.377 ************************************ 00:32:56.377 09:07:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:56.635 * Looking for test storage... 
00:32:56.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:56.635 09:07:13 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:56.635 09:07:13 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:56.635 09:07:13 -- nvmf/common.sh@7 -- # uname -s 00:32:56.635 09:07:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:56.635 09:07:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:56.635 09:07:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:56.635 09:07:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:56.635 09:07:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:56.635 09:07:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:56.636 09:07:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:56.636 09:07:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:56.636 09:07:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:56.636 09:07:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:56.636 09:07:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:56.636 09:07:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:56.636 09:07:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:56.636 09:07:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:56.636 09:07:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:56.636 09:07:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:56.636 09:07:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:56.636 09:07:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:56.636 09:07:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:56.636 09:07:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:56.636 09:07:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.636 09:07:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.636 09:07:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.636 09:07:13 -- paths/export.sh@5 -- # export PATH 00:32:56.636 09:07:13 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.636 09:07:13 -- nvmf/common.sh@47 -- # : 0 00:32:56.636 09:07:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:56.636 09:07:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:56.636 09:07:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:56.636 09:07:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:56.636 09:07:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:56.636 09:07:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:56.636 09:07:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:56.636 09:07:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:56.636 09:07:13 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:56.636 09:07:13 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:56.636 09:07:13 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:56.636 09:07:13 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:56.636 09:07:13 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:56.636 09:07:13 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:56.636 09:07:13 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:56.636 09:07:13 -- keyring/common.sh@15 -- # local name key digest path 00:32:56.636 09:07:13 -- keyring/common.sh@17 -- # name=key0 00:32:56.636 09:07:13 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:56.636 09:07:13 -- keyring/common.sh@17 -- # digest=0 00:32:56.636 09:07:13 -- keyring/common.sh@18 -- # mktemp 00:32:56.636 09:07:13 -- keyring/common.sh@18 -- # path=/tmp/tmp.of77IRvFuf 00:32:56.636 09:07:13 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:56.636 09:07:13 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:56.636 09:07:13 -- nvmf/common.sh@691 -- # local prefix key digest 00:32:56.636 09:07:13 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:32:56.636 09:07:13 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:32:56.636 09:07:13 -- nvmf/common.sh@693 -- # digest=0 00:32:56.636 09:07:13 -- nvmf/common.sh@694 -- # python - 00:32:56.636 09:07:13 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.of77IRvFuf 00:32:56.636 09:07:13 -- keyring/common.sh@23 -- # echo /tmp/tmp.of77IRvFuf 00:32:56.636 09:07:13 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.of77IRvFuf 00:32:56.636 09:07:13 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:56.636 09:07:13 -- keyring/common.sh@15 -- # local name key digest path 00:32:56.636 09:07:13 -- keyring/common.sh@17 -- # name=key1 00:32:56.636 09:07:13 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:56.636 09:07:13 -- keyring/common.sh@17 -- # digest=0 00:32:56.636 09:07:13 -- keyring/common.sh@18 -- # mktemp 00:32:56.636 09:07:13 -- keyring/common.sh@18 -- # path=/tmp/tmp.op7YcX44gT 00:32:56.636 09:07:13 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:56.636 09:07:13 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:32:56.636 09:07:13 -- nvmf/common.sh@691 -- # local prefix key digest 00:32:56.636 09:07:13 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:32:56.636 09:07:13 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:32:56.636 09:07:13 -- nvmf/common.sh@693 -- # digest=0 00:32:56.636 09:07:13 -- nvmf/common.sh@694 -- # python - 00:32:56.636 09:07:13 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.op7YcX44gT 00:32:56.636 09:07:13 -- keyring/common.sh@23 -- # echo /tmp/tmp.op7YcX44gT 00:32:56.636 09:07:13 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.op7YcX44gT 00:32:56.636 09:07:13 -- keyring/file.sh@30 -- # tgtpid=2281034 00:32:56.636 09:07:13 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:56.636 09:07:13 -- keyring/file.sh@32 -- # waitforlisten 2281034 00:32:56.636 09:07:13 -- common/autotest_common.sh@817 -- # '[' -z 2281034 ']' 00:32:56.636 09:07:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.636 09:07:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:56.636 09:07:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.636 09:07:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:56.636 09:07:13 -- common/autotest_common.sh@10 -- # set +x 00:32:56.636 [2024-04-26 09:07:13.836538] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 00:32:56.636 [2024-04-26 09:07:13.836589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281034 ] 00:32:56.636 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.894 [2024-04-26 09:07:13.906003] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.894 [2024-04-26 09:07:13.977356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.459 09:07:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:57.459 09:07:14 -- common/autotest_common.sh@850 -- # return 0 00:32:57.459 09:07:14 -- keyring/file.sh@33 -- # rpc_cmd 00:32:57.459 09:07:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:57.459 09:07:14 -- common/autotest_common.sh@10 -- # set +x 00:32:57.459 [2024-04-26 09:07:14.634043] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.459 null0 00:32:57.459 [2024-04-26 09:07:14.666112] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:57.459 [2024-04-26 09:07:14.666400] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:57.459 [2024-04-26 09:07:14.674128] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:57.459 09:07:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:57.459 09:07:14 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:57.459 09:07:14 -- common/autotest_common.sh@638 -- # local es=0 00:32:57.459 09:07:14 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:57.459 09:07:14 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:32:57.459 09:07:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:57.460 09:07:14 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:32:57.460 09:07:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:32:57.460 09:07:14 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:57.460 09:07:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:57.460 09:07:14 -- common/autotest_common.sh@10 -- # set +x 00:32:57.460 [2024-04-26 09:07:14.690170] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:32:57.460 { 00:32:57.460 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:57.460 "secure_channel": false, 00:32:57.460 "listen_address": { 00:32:57.460 "trtype": "tcp", 00:32:57.460 "traddr": "127.0.0.1", 00:32:57.460 "trsvcid": "4420" 00:32:57.460 }, 00:32:57.460 "method": "nvmf_subsystem_add_listener", 00:32:57.460 "req_id": 1 00:32:57.460 } 00:32:57.460 Got JSON-RPC error response 00:32:57.460 response: 00:32:57.460 { 00:32:57.460 "code": -32602, 00:32:57.460 "message": "Invalid parameters" 00:32:57.460 } 00:32:57.460 09:07:14 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:32:57.460 09:07:14 -- common/autotest_common.sh@641 -- # es=1 00:32:57.460 09:07:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:32:57.460 09:07:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:32:57.460 09:07:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:32:57.460 09:07:14 -- keyring/file.sh@46 -- # bperfpid=2281148 00:32:57.460 09:07:14 -- keyring/file.sh@48 -- # waitforlisten 2281148 /var/tmp/bperf.sock 00:32:57.460 09:07:14 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:57.460 09:07:14 -- common/autotest_common.sh@817 -- # '[' -z 2281148 ']' 00:32:57.460 09:07:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:57.460 09:07:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:57.460 09:07:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:57.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:57.460 09:07:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:57.460 09:07:14 -- common/autotest_common.sh@10 -- # set +x 00:32:57.718 [2024-04-26 09:07:14.747144] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
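A note on the prep_key steps traced earlier (keyring/common.sh@15-21 and the format_interchange_psk/format_key helpers at nvmf/common.sh@691-704): each key is wrapped in the NVMe TLS PSK interchange form by an inline "python -" snippet, written to a mktemp file, and locked down to mode 0600. A minimal standalone sketch of that encoding follows; the payload layout (the key string's bytes plus their little-endian CRC32, base64-encoded) and the variable names are illustrative assumptions, not code lifted from the harness:

    # Hedged re-creation of: prep_key key0 00112233445566778899aabbccddeeff 0
    key=00112233445566778899aabbccddeeff    # key0 from keyring/file.sh@15
    digest=0                                # 0 = no PSK digest transform (assumed meaning)
    path=$(mktemp)                          # e.g. /tmp/tmp.of77IRvFuf in this run
    # assumed interchange layout: NVMeTLSkey-1:<digest>:<base64(key bytes + CRC32)>:
    psk=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$key" "$digest")
    echo "$psk" > "$path"
    chmod 0600 "$path"    # looser modes are rejected, as the 0660 negative test below shows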
00:32:57.718 [2024-04-26 09:07:14.747198] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281148 ] 00:32:57.718 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.718 [2024-04-26 09:07:14.813292] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.718 [2024-04-26 09:07:14.882514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.689 09:07:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:58.689 09:07:15 -- common/autotest_common.sh@850 -- # return 0 00:32:58.689 09:07:15 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.of77IRvFuf 00:32:58.689 09:07:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.of77IRvFuf 00:32:58.689 09:07:15 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.op7YcX44gT 00:32:58.689 09:07:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.op7YcX44gT 00:32:58.689 09:07:15 -- keyring/file.sh@51 -- # get_key key0 00:32:58.689 09:07:15 -- keyring/file.sh@51 -- # jq -r .path 00:32:58.689 09:07:15 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:58.689 09:07:15 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:58.689 09:07:15 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:58.947 09:07:16 -- keyring/file.sh@51 -- # [[ /tmp/tmp.of77IRvFuf == \/\t\m\p\/\t\m\p\.\o\f\7\7\I\R\v\F\u\f ]] 00:32:58.947 09:07:16 -- keyring/file.sh@52 -- # jq -r .path 00:32:58.947 09:07:16 -- keyring/file.sh@52 -- # get_key key1 00:32:58.947 09:07:16 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:58.947 09:07:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:58.947 09:07:16 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:59.204 09:07:16 -- keyring/file.sh@52 -- # [[ /tmp/tmp.op7YcX44gT == \/\t\m\p\/\t\m\p\.\o\p\7\Y\c\X\4\4\g\T ]] 00:32:59.204 09:07:16 -- keyring/file.sh@53 -- # get_refcnt key0 00:32:59.204 09:07:16 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:59.204 09:07:16 -- keyring/common.sh@12 -- # get_key key0 00:32:59.204 09:07:16 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:59.204 09:07:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:59.204 09:07:16 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:59.204 09:07:16 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:59.204 09:07:16 -- keyring/file.sh@54 -- # get_refcnt key1 00:32:59.204 09:07:16 -- keyring/common.sh@12 -- # get_key key1 00:32:59.204 09:07:16 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:59.204 09:07:16 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:59.204 09:07:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:59.204 09:07:16 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:59.462 09:07:16 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:59.462 
09:07:16 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:59.462 09:07:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:59.720 [2024-04-26 09:07:16.735799] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:59.720 nvme0n1 00:32:59.720 09:07:16 -- keyring/file.sh@59 -- # get_refcnt key0 00:32:59.720 09:07:16 -- keyring/common.sh@12 -- # get_key key0 00:32:59.720 09:07:16 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:59.720 09:07:16 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:59.720 09:07:16 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:59.720 09:07:16 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:59.977 09:07:16 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:59.977 09:07:16 -- keyring/file.sh@60 -- # get_refcnt key1 00:32:59.977 09:07:16 -- keyring/common.sh@12 -- # get_key key1 00:32:59.977 09:07:17 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:59.977 09:07:17 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:59.978 09:07:17 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:59.978 09:07:17 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:59.978 09:07:17 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:59.978 09:07:17 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:00.235 Running I/O for 1 seconds... 
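Condensed, the key-handling sequence the trace above just exercised, expressed as the bare RPCs against the bperf socket (paths and arguments exactly as logged; only the ordering notes in the comments are added): register both PSK files in bdevperf's keyring, attach a TLS NVMe/TCP controller by key name, read the refcount back, then drive the queued workload with the perform_tests helper, which yields the latency table below:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # register the interchange-format PSK files under key names
    $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.of77IRvFuf
    $rpc -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.op7YcX44gT
    # TLS attach by key name; while nvme0 is attached, key0's refcnt reads 2 instead of 1
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key0
    # what get_refcnt boils down to: keyring_get_keys plus a jq filter
    $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'
    # start the queued 1-second randrw run against the new bdev
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests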
00:33:01.167
00:33:01.167 Latency(us)
00:33:01.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:01.167 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:33:01.167 nvme0n1 : 1.03 5409.59 21.13 0.00 0.00 23471.67 8074.04 140928.61
00:33:01.167 ===================================================================================================================
00:33:01.167 Total : 5409.59 21.13 0.00 0.00 23471.67 8074.04 140928.61
00:33:01.167 0
00:33:01.167 09:07:18 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:01.167 09:07:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:01.424 09:07:18 -- keyring/file.sh@65 -- # get_refcnt key0 00:33:01.424 09:07:18 -- keyring/common.sh@12 -- # get_key key0 00:33:01.424 09:07:18 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:01.424 09:07:18 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:01.424 09:07:18 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:01.424 09:07:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:01.424 09:07:18 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:01.424 09:07:18 -- keyring/file.sh@66 -- # get_refcnt key1 00:33:01.424 09:07:18 -- keyring/common.sh@12 -- # get_key key1 00:33:01.681 09:07:18 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:01.681 09:07:18 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:01.681 09:07:18 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:01.681 09:07:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:01.681 09:07:18 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:01.681 09:07:18 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:01.681 09:07:18 -- common/autotest_common.sh@638 -- # local es=0 00:33:01.681 09:07:18 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:01.681 09:07:18 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:01.681 09:07:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:01.681 09:07:18 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:01.681 09:07:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:01.681 09:07:18 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:01.681 09:07:18 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:01.939 [2024-04-26 09:07:19.014015] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:01.939 [2024-04-26 09:07:19.014906] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb16870 (107): Transport endpoint is not connected 00:33:01.939 [2024-04-26 09:07:19.015900] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb16870 (9): Bad file descriptor 00:33:01.939 [2024-04-26 09:07:19.016900] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:01.939 [2024-04-26 09:07:19.016912] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:01.939 [2024-04-26 09:07:19.016920] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:01.939 request: 00:33:01.939 { 00:33:01.939 "name": "nvme0", 00:33:01.939 "trtype": "tcp", 00:33:01.939 "traddr": "127.0.0.1", 00:33:01.939 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:01.939 "adrfam": "ipv4", 00:33:01.939 "trsvcid": "4420", 00:33:01.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:01.939 "psk": "key1", 00:33:01.939 "method": "bdev_nvme_attach_controller", 00:33:01.939 "req_id": 1 00:33:01.939 } 00:33:01.939 Got JSON-RPC error response 00:33:01.939 response: 00:33:01.939 { 00:33:01.939 "code": -32602, 00:33:01.939 "message": "Invalid parameters" 00:33:01.939 } 00:33:01.939 09:07:19 -- common/autotest_common.sh@641 -- # es=1 00:33:01.939 09:07:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:01.939 09:07:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:01.939 09:07:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:01.939 09:07:19 -- keyring/file.sh@71 -- # get_refcnt key0 00:33:01.939 09:07:19 -- keyring/common.sh@12 -- # get_key key0 00:33:01.939 09:07:19 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:01.939 09:07:19 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:01.939 09:07:19 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:01.939 09:07:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:02.197 09:07:19 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:02.197 09:07:19 -- keyring/file.sh@72 -- # get_refcnt key1 00:33:02.197 09:07:19 -- keyring/common.sh@12 -- # get_key key1 00:33:02.197 09:07:19 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:02.197 09:07:19 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:02.197 09:07:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:02.197 09:07:19 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:02.197 09:07:19 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:02.197 09:07:19 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:02.197 09:07:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:02.455 09:07:19 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:02.455 09:07:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:02.713 09:07:19 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:02.713 09:07:19 -- keyring/file.sh@77 -- # jq length 00:33:02.713 09:07:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:02.713 09:07:19 -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:02.713 09:07:19 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.of77IRvFuf 00:33:02.713 09:07:19 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.of77IRvFuf 00:33:02.713 09:07:19 -- common/autotest_common.sh@638 -- # local es=0 00:33:02.713 09:07:19 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.of77IRvFuf 00:33:02.713 09:07:19 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:02.713 09:07:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:02.713 09:07:19 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:02.713 09:07:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:02.713 09:07:19 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.of77IRvFuf 00:33:02.713 09:07:19 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.of77IRvFuf 00:33:02.970 [2024-04-26 09:07:20.066916] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.of77IRvFuf': 0100660 00:33:02.970 [2024-04-26 09:07:20.066948] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:02.970 request: 00:33:02.970 { 00:33:02.970 "name": "key0", 00:33:02.970 "path": "/tmp/tmp.of77IRvFuf", 00:33:02.970 "method": "keyring_file_add_key", 00:33:02.970 "req_id": 1 00:33:02.970 } 00:33:02.970 Got JSON-RPC error response 00:33:02.970 response: 00:33:02.970 { 00:33:02.970 "code": -1, 00:33:02.970 "message": "Operation not permitted" 00:33:02.970 } 00:33:02.970 09:07:20 -- common/autotest_common.sh@641 -- # es=1 00:33:02.970 09:07:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:02.970 09:07:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:02.970 09:07:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:02.970 09:07:20 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.of77IRvFuf 00:33:02.970 09:07:20 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.of77IRvFuf 00:33:02.970 09:07:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.of77IRvFuf 00:33:03.227 09:07:20 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.of77IRvFuf 00:33:03.227 09:07:20 -- keyring/file.sh@88 -- # get_refcnt key0 00:33:03.227 09:07:20 -- keyring/common.sh@12 -- # get_key key0 00:33:03.227 09:07:20 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:03.227 09:07:20 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:03.227 09:07:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:03.227 09:07:20 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:03.227 09:07:20 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:03.227 09:07:20 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:03.227 09:07:20 -- common/autotest_common.sh@638 -- # local es=0 00:33:03.227 09:07:20 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:03.227 09:07:20 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:03.227 09:07:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:03.227 09:07:20 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:03.227 09:07:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:03.227 09:07:20 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:03.227 09:07:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:03.485 [2024-04-26 09:07:20.604302] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.of77IRvFuf': No such file or directory 00:33:03.485 [2024-04-26 09:07:20.604328] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:03.485 [2024-04-26 09:07:20.604351] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:03.485 [2024-04-26 09:07:20.604360] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:03.485 [2024-04-26 09:07:20.604368] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:03.485 request: 00:33:03.485 { 00:33:03.485 "name": "nvme0", 00:33:03.485 "trtype": "tcp", 00:33:03.485 "traddr": "127.0.0.1", 00:33:03.485 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:03.485 "adrfam": "ipv4", 00:33:03.485 "trsvcid": "4420", 00:33:03.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:03.485 "psk": "key0", 00:33:03.485 "method": "bdev_nvme_attach_controller", 00:33:03.485 "req_id": 1 00:33:03.485 } 00:33:03.485 Got JSON-RPC error response 00:33:03.485 response: 00:33:03.485 { 00:33:03.485 "code": -19, 00:33:03.485 "message": "No such device" 00:33:03.485 } 00:33:03.485 09:07:20 -- common/autotest_common.sh@641 -- # es=1 00:33:03.485 09:07:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:03.485 09:07:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:03.485 09:07:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:03.485 09:07:20 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:03.485 09:07:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:03.742 09:07:20 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:03.742 09:07:20 -- keyring/common.sh@15 -- # local name key digest path 00:33:03.742 09:07:20 -- keyring/common.sh@17 -- # name=key0 00:33:03.742 09:07:20 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:03.742 09:07:20 -- keyring/common.sh@17 -- # digest=0 00:33:03.742 09:07:20 -- keyring/common.sh@18 -- # mktemp 00:33:03.742 09:07:20 -- keyring/common.sh@18 -- # path=/tmp/tmp.hgLQIJVZkQ 00:33:03.742 09:07:20 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:03.742 09:07:20 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:03.742 09:07:20 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:03.742 09:07:20 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:03.742 09:07:20 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:33:03.742 09:07:20 -- nvmf/common.sh@693 -- # digest=0 00:33:03.742 09:07:20 -- nvmf/common.sh@694 -- # python - 00:33:03.742 09:07:20 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hgLQIJVZkQ 00:33:03.742 09:07:20 -- keyring/common.sh@23 -- # echo /tmp/tmp.hgLQIJVZkQ 00:33:03.742 09:07:20 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.hgLQIJVZkQ 00:33:03.742 09:07:20 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hgLQIJVZkQ 00:33:03.742 09:07:20 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hgLQIJVZkQ 00:33:04.000 09:07:21 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:04.000 09:07:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:04.000 nvme0n1 00:33:04.000 09:07:21 -- keyring/file.sh@99 -- # get_refcnt key0 00:33:04.000 09:07:21 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:04.000 09:07:21 -- keyring/common.sh@12 -- # get_key key0 00:33:04.000 09:07:21 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:04.000 09:07:21 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:04.000 09:07:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:04.258 09:07:21 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:04.258 09:07:21 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:04.258 09:07:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:04.515 09:07:21 -- keyring/file.sh@101 -- # get_key key0 00:33:04.515 09:07:21 -- keyring/file.sh@101 -- # jq -r .removed 00:33:04.515 09:07:21 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:04.515 09:07:21 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:04.515 09:07:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:04.773 09:07:21 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:04.773 09:07:21 -- keyring/file.sh@102 -- # get_refcnt key0 00:33:04.773 09:07:21 -- keyring/common.sh@12 -- # get_key key0 00:33:04.773 09:07:21 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:04.773 09:07:21 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:04.773 09:07:21 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:04.773 09:07:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:04.773 09:07:21 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:04.773 09:07:21 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:04.773 09:07:21 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:05.030 09:07:22 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:05.030 09:07:22 -- keyring/file.sh@104 -- # jq length 00:33:05.030 
09:07:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:05.288 09:07:22 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:05.288 09:07:22 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hgLQIJVZkQ 00:33:05.288 09:07:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hgLQIJVZkQ 00:33:05.288 09:07:22 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.op7YcX44gT 00:33:05.288 09:07:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.op7YcX44gT 00:33:05.546 09:07:22 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:05.546 09:07:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:05.804 nvme0n1 00:33:05.804 09:07:22 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:05.804 09:07:22 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:06.062 09:07:23 -- keyring/file.sh@112 -- # config='{ 00:33:06.062 "subsystems": [ 00:33:06.062 { 00:33:06.062 "subsystem": "keyring", 00:33:06.062 "config": [ 00:33:06.062 { 00:33:06.062 "method": "keyring_file_add_key", 00:33:06.062 "params": { 00:33:06.062 "name": "key0", 00:33:06.062 "path": "/tmp/tmp.hgLQIJVZkQ" 00:33:06.062 } 00:33:06.062 }, 00:33:06.062 { 00:33:06.062 "method": "keyring_file_add_key", 00:33:06.062 "params": { 00:33:06.062 "name": "key1", 00:33:06.062 "path": "/tmp/tmp.op7YcX44gT" 00:33:06.062 } 00:33:06.062 } 00:33:06.062 ] 00:33:06.062 }, 00:33:06.062 { 00:33:06.062 "subsystem": "iobuf", 00:33:06.062 "config": [ 00:33:06.062 { 00:33:06.062 "method": "iobuf_set_options", 00:33:06.062 "params": { 00:33:06.062 "small_pool_count": 8192, 00:33:06.062 "large_pool_count": 1024, 00:33:06.062 "small_bufsize": 8192, 00:33:06.062 "large_bufsize": 135168 00:33:06.062 } 00:33:06.062 } 00:33:06.062 ] 00:33:06.062 }, 00:33:06.062 { 00:33:06.063 "subsystem": "sock", 00:33:06.063 "config": [ 00:33:06.063 { 00:33:06.063 "method": "sock_impl_set_options", 00:33:06.063 "params": { 00:33:06.063 "impl_name": "posix", 00:33:06.063 "recv_buf_size": 2097152, 00:33:06.063 "send_buf_size": 2097152, 00:33:06.063 "enable_recv_pipe": true, 00:33:06.063 "enable_quickack": false, 00:33:06.063 "enable_placement_id": 0, 00:33:06.063 "enable_zerocopy_send_server": true, 00:33:06.063 "enable_zerocopy_send_client": false, 00:33:06.063 "zerocopy_threshold": 0, 00:33:06.063 "tls_version": 0, 00:33:06.063 "enable_ktls": false 00:33:06.063 } 00:33:06.063 }, 00:33:06.063 { 00:33:06.063 "method": "sock_impl_set_options", 00:33:06.063 "params": { 00:33:06.063 "impl_name": "ssl", 00:33:06.063 "recv_buf_size": 4096, 00:33:06.063 "send_buf_size": 4096, 00:33:06.063 "enable_recv_pipe": true, 00:33:06.063 "enable_quickack": false, 00:33:06.063 "enable_placement_id": 0, 00:33:06.063 "enable_zerocopy_send_server": true, 00:33:06.063 "enable_zerocopy_send_client": false, 00:33:06.063 "zerocopy_threshold": 0, 00:33:06.063 
"tls_version": 0, 00:33:06.063 "enable_ktls": false 00:33:06.063 } 00:33:06.063 } 00:33:06.063 ] 00:33:06.063 }, 00:33:06.063 { 00:33:06.063 "subsystem": "vmd", 00:33:06.063 "config": [] 00:33:06.063 }, 00:33:06.063 { 00:33:06.063 "subsystem": "accel", 00:33:06.063 "config": [ 00:33:06.063 { 00:33:06.063 "method": "accel_set_options", 00:33:06.063 "params": { 00:33:06.063 "small_cache_size": 128, 00:33:06.063 "large_cache_size": 16, 00:33:06.063 "task_count": 2048, 00:33:06.063 "sequence_count": 2048, 00:33:06.063 "buf_count": 2048 00:33:06.063 } 00:33:06.063 } 00:33:06.063 ] 00:33:06.063 }, 00:33:06.063 { 00:33:06.063 "subsystem": "bdev", 00:33:06.063 "config": [ 00:33:06.063 { 00:33:06.063 "method": "bdev_set_options", 00:33:06.063 "params": { 00:33:06.063 "bdev_io_pool_size": 65535, 00:33:06.063 "bdev_io_cache_size": 256, 00:33:06.063 "bdev_auto_examine": true, 00:33:06.063 "iobuf_small_cache_size": 128, 00:33:06.063 "iobuf_large_cache_size": 16 00:33:06.063 } 00:33:06.063 }, 00:33:06.063 { 00:33:06.063 "method": "bdev_raid_set_options", 00:33:06.063 "params": { 00:33:06.063 "process_window_size_kb": 1024 00:33:06.063 } 00:33:06.063 }, 00:33:06.063 { 00:33:06.063 "method": "bdev_iscsi_set_options", 00:33:06.063 "params": { 00:33:06.063 "timeout_sec": 30 00:33:06.063 } 00:33:06.063 }, 00:33:06.063 { 00:33:06.063 "method": "bdev_nvme_set_options", 00:33:06.063 "params": { 00:33:06.063 "action_on_timeout": "none", 00:33:06.063 "timeout_us": 0, 00:33:06.063 "timeout_admin_us": 0, 00:33:06.063 "keep_alive_timeout_ms": 10000, 00:33:06.063 "arbitration_burst": 0, 00:33:06.063 "low_priority_weight": 0, 00:33:06.063 "medium_priority_weight": 0, 00:33:06.063 "high_priority_weight": 0, 00:33:06.063 "nvme_adminq_poll_period_us": 10000, 00:33:06.063 "nvme_ioq_poll_period_us": 0, 00:33:06.063 "io_queue_requests": 512, 00:33:06.063 "delay_cmd_submit": true, 00:33:06.063 "transport_retry_count": 4, 00:33:06.063 "bdev_retry_count": 3, 00:33:06.063 "transport_ack_timeout": 0, 00:33:06.063 "ctrlr_loss_timeout_sec": 0, 00:33:06.063 "reconnect_delay_sec": 0, 00:33:06.063 "fast_io_fail_timeout_sec": 0, 00:33:06.063 "disable_auto_failback": false, 00:33:06.063 "generate_uuids": false, 00:33:06.063 "transport_tos": 0, 00:33:06.063 "nvme_error_stat": false, 00:33:06.063 "rdma_srq_size": 0, 00:33:06.063 "io_path_stat": false, 00:33:06.063 "allow_accel_sequence": false, 00:33:06.063 "rdma_max_cq_size": 0, 00:33:06.063 "rdma_cm_event_timeout_ms": 0, 00:33:06.063 "dhchap_digests": [ 00:33:06.063 "sha256", 00:33:06.063 "sha384", 00:33:06.063 "sha512" 00:33:06.063 ], 00:33:06.063 "dhchap_dhgroups": [ 00:33:06.063 "null", 00:33:06.063 "ffdhe2048", 00:33:06.063 "ffdhe3072", 00:33:06.063 "ffdhe4096", 00:33:06.063 "ffdhe6144", 00:33:06.063 "ffdhe8192" 00:33:06.063 ] 00:33:06.063 } 00:33:06.063 }, 00:33:06.063 { 00:33:06.063 "method": "bdev_nvme_attach_controller", 00:33:06.063 "params": { 00:33:06.063 "name": "nvme0", 00:33:06.063 "trtype": "TCP", 00:33:06.063 "adrfam": "IPv4", 00:33:06.063 "traddr": "127.0.0.1", 00:33:06.063 "trsvcid": "4420", 00:33:06.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:06.063 "prchk_reftag": false, 00:33:06.063 "prchk_guard": false, 00:33:06.063 "ctrlr_loss_timeout_sec": 0, 00:33:06.063 "reconnect_delay_sec": 0, 00:33:06.063 "fast_io_fail_timeout_sec": 0, 00:33:06.063 "psk": "key0", 00:33:06.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:06.063 "hdgst": false, 00:33:06.063 "ddgst": false 00:33:06.063 } 00:33:06.063 }, 00:33:06.063 { 00:33:06.063 "method": "bdev_nvme_set_hotplug", 
00:33:06.063 "params": { 00:33:06.063 "period_us": 100000, 00:33:06.063 "enable": false 00:33:06.063 } 00:33:06.063 }, 00:33:06.063 { 00:33:06.063 "method": "bdev_wait_for_examine" 00:33:06.063 } 00:33:06.063 ] 00:33:06.063 }, 00:33:06.063 { 00:33:06.063 "subsystem": "nbd", 00:33:06.063 "config": [] 00:33:06.063 } 00:33:06.063 ] 00:33:06.063 }' 00:33:06.063 09:07:23 -- keyring/file.sh@114 -- # killprocess 2281148 00:33:06.063 09:07:23 -- common/autotest_common.sh@936 -- # '[' -z 2281148 ']' 00:33:06.063 09:07:23 -- common/autotest_common.sh@940 -- # kill -0 2281148 00:33:06.063 09:07:23 -- common/autotest_common.sh@941 -- # uname 00:33:06.063 09:07:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:06.063 09:07:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2281148 00:33:06.063 09:07:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:06.063 09:07:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:06.063 09:07:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2281148' 00:33:06.063 killing process with pid 2281148 00:33:06.063 09:07:23 -- common/autotest_common.sh@955 -- # kill 2281148
00:33:06.063 Received shutdown signal, test time was about 1.000000 seconds
00:33:06.063
00:33:06.063 Latency(us)
00:33:06.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:06.063 ===================================================================================================================
00:33:06.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:06.063 09:07:23 -- common/autotest_common.sh@960 -- # wait 2281148 00:33:06.322 09:07:23 -- keyring/file.sh@117 -- # bperfpid=2282770 00:33:06.322 09:07:23 -- keyring/file.sh@119 -- # waitforlisten 2282770 /var/tmp/bperf.sock 00:33:06.322 09:07:23 -- common/autotest_common.sh@817 -- # '[' -z 2282770 ']' 00:33:06.322 09:07:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:06.322 09:07:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:06.322 09:07:23 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:06.322 09:07:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
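The -c /dev/fd/63 on the bdevperf command line just above is how file.sh reuses state between instances: the first bperf's configuration was snapshotted as JSON with save_config (file.sh@112), and that snapshot is replayed as the second instance's startup config (the full dump is echoed below), so the keyring and controller state need not be rebuilt RPC by RPC. A sketch of the pattern, assuming bash process substitution is what supplies the /dev/fd/63 path seen in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    # snapshot the live keyring/bdev/sock configuration of the running instance
    config=$("$rpc" -s /var/tmp/bperf.sock save_config)
    # relaunch with the snapshot as startup config; <(...) surfaces as /dev/fd/63,
    # and -z keeps bdevperf idle until a perform_tests RPC arrives
    "$bperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config")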
00:33:06.322 09:07:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:06.322 09:07:23 -- keyring/file.sh@115 -- # echo '{ 00:33:06.322 "subsystems": [ 00:33:06.322 { 00:33:06.322 "subsystem": "keyring", 00:33:06.322 "config": [ 00:33:06.322 { 00:33:06.322 "method": "keyring_file_add_key", 00:33:06.322 "params": { 00:33:06.322 "name": "key0", 00:33:06.322 "path": "/tmp/tmp.hgLQIJVZkQ" 00:33:06.322 } 00:33:06.322 }, 00:33:06.322 { 00:33:06.322 "method": "keyring_file_add_key", 00:33:06.322 "params": { 00:33:06.322 "name": "key1", 00:33:06.322 "path": "/tmp/tmp.op7YcX44gT" 00:33:06.322 } 00:33:06.322 } 00:33:06.322 ] 00:33:06.322 }, 00:33:06.322 { 00:33:06.322 "subsystem": "iobuf", 00:33:06.322 "config": [ 00:33:06.322 { 00:33:06.322 "method": "iobuf_set_options", 00:33:06.322 "params": { 00:33:06.322 "small_pool_count": 8192, 00:33:06.322 "large_pool_count": 1024, 00:33:06.322 "small_bufsize": 8192, 00:33:06.322 "large_bufsize": 135168 00:33:06.322 } 00:33:06.322 } 00:33:06.322 ] 00:33:06.322 }, 00:33:06.322 { 00:33:06.322 "subsystem": "sock", 00:33:06.322 "config": [ 00:33:06.322 { 00:33:06.322 "method": "sock_impl_set_options", 00:33:06.322 "params": { 00:33:06.322 "impl_name": "posix", 00:33:06.322 "recv_buf_size": 2097152, 00:33:06.322 "send_buf_size": 2097152, 00:33:06.322 "enable_recv_pipe": true, 00:33:06.322 "enable_quickack": false, 00:33:06.322 "enable_placement_id": 0, 00:33:06.322 "enable_zerocopy_send_server": true, 00:33:06.322 "enable_zerocopy_send_client": false, 00:33:06.322 "zerocopy_threshold": 0, 00:33:06.322 "tls_version": 0, 00:33:06.322 "enable_ktls": false 00:33:06.322 } 00:33:06.322 }, 00:33:06.322 { 00:33:06.322 "method": "sock_impl_set_options", 00:33:06.322 "params": { 00:33:06.322 "impl_name": "ssl", 00:33:06.322 "recv_buf_size": 4096, 00:33:06.322 "send_buf_size": 4096, 00:33:06.322 "enable_recv_pipe": true, 00:33:06.322 "enable_quickack": false, 00:33:06.322 "enable_placement_id": 0, 00:33:06.322 "enable_zerocopy_send_server": true, 00:33:06.322 "enable_zerocopy_send_client": false, 00:33:06.322 "zerocopy_threshold": 0, 00:33:06.322 "tls_version": 0, 00:33:06.322 "enable_ktls": false 00:33:06.322 } 00:33:06.322 } 00:33:06.322 ] 00:33:06.322 }, 00:33:06.322 { 00:33:06.322 "subsystem": "vmd", 00:33:06.322 "config": [] 00:33:06.322 }, 00:33:06.322 { 00:33:06.322 "subsystem": "accel", 00:33:06.322 "config": [ 00:33:06.322 { 00:33:06.322 "method": "accel_set_options", 00:33:06.322 "params": { 00:33:06.322 "small_cache_size": 128, 00:33:06.322 "large_cache_size": 16, 00:33:06.322 "task_count": 2048, 00:33:06.322 "sequence_count": 2048, 00:33:06.322 "buf_count": 2048 00:33:06.322 } 00:33:06.322 } 00:33:06.322 ] 00:33:06.322 }, 00:33:06.322 { 00:33:06.322 "subsystem": "bdev", 00:33:06.322 "config": [ 00:33:06.322 { 00:33:06.322 "method": "bdev_set_options", 00:33:06.322 "params": { 00:33:06.322 "bdev_io_pool_size": 65535, 00:33:06.322 "bdev_io_cache_size": 256, 00:33:06.322 "bdev_auto_examine": true, 00:33:06.322 "iobuf_small_cache_size": 128, 00:33:06.322 "iobuf_large_cache_size": 16 00:33:06.322 } 00:33:06.322 }, 00:33:06.322 { 00:33:06.323 "method": "bdev_raid_set_options", 00:33:06.323 "params": { 00:33:06.323 "process_window_size_kb": 1024 00:33:06.323 } 00:33:06.323 }, 00:33:06.323 { 00:33:06.323 "method": "bdev_iscsi_set_options", 00:33:06.323 "params": { 00:33:06.323 "timeout_sec": 30 00:33:06.323 } 00:33:06.323 }, 00:33:06.323 { 00:33:06.323 "method": "bdev_nvme_set_options", 00:33:06.323 "params": { 00:33:06.323 "action_on_timeout": "none", 
00:33:06.323 "timeout_us": 0, 00:33:06.323 "timeout_admin_us": 0, 00:33:06.323 "keep_alive_timeout_ms": 10000, 00:33:06.323 "arbitration_burst": 0, 00:33:06.323 "low_priority_weight": 0, 00:33:06.323 "medium_priority_weight": 0, 00:33:06.323 "high_priority_weight": 0, 00:33:06.323 "nvme_adminq_poll_period_us": 10000, 00:33:06.323 "nvme_ioq_poll_period_us": 0, 00:33:06.323 "io_queue_requests": 512, 00:33:06.323 "delay_cmd_submit": true, 00:33:06.323 "transport_retry_count": 4, 00:33:06.323 "bdev_retry_count": 3, 00:33:06.323 "transport_ack_timeout": 0, 00:33:06.323 "ctrlr_loss_timeout_sec": 0, 00:33:06.323 "reconnect_delay_sec": 0, 00:33:06.323 "fast_io_fail_timeout_sec": 0, 00:33:06.323 "disable_auto_failback": false, 00:33:06.323 "generate_uuids": false, 00:33:06.323 "transport_tos": 0, 00:33:06.323 "nvme_error_stat": false, 00:33:06.323 "rdma_srq_size": 0, 00:33:06.323 "io_path_stat": false, 00:33:06.323 "allow_accel_sequence": false, 00:33:06.323 "rdma_max_cq_size": 0, 00:33:06.323 "rdma_cm_event_timeout_ms": 0, 00:33:06.323 "dhchap_digests": [ 00:33:06.323 "sha256", 00:33:06.323 "sha384", 00:33:06.323 "sha512" 00:33:06.323 ], 00:33:06.323 "dhchap_dhgroups": [ 00:33:06.323 "null", 00:33:06.323 "ffdhe2048", 00:33:06.323 "ffdhe3072", 00:33:06.323 "ffdhe4096", 00:33:06.323 "ffdhe6144", 00:33:06.323 "ffdhe8192" 00:33:06.323 ] 00:33:06.323 } 00:33:06.323 }, 00:33:06.323 { 00:33:06.323 "method": "bdev_nvme_attach_controller", 00:33:06.323 "params": { 00:33:06.323 "name": "nvme0", 00:33:06.323 "trtype": "TCP", 00:33:06.323 "adrfam": "IPv4", 00:33:06.323 "traddr": "127.0.0.1", 00:33:06.323 "trsvcid": "4420", 00:33:06.323 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:06.323 "prchk_reftag": false, 00:33:06.323 "prchk_guard": false, 00:33:06.323 "ctrlr_loss_timeout_sec": 0, 00:33:06.323 "reconnect_delay_sec": 0, 00:33:06.323 "fast_io_fail_timeout_sec": 0, 00:33:06.323 "psk": "key0", 00:33:06.323 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:06.323 "hdgst": false, 00:33:06.323 "ddgst": false 00:33:06.323 } 00:33:06.323 }, 00:33:06.323 { 00:33:06.323 "method": "bdev_nvme_set_hotplug", 00:33:06.323 "params": { 00:33:06.323 "period_us": 100000, 00:33:06.323 "enable": false 00:33:06.323 } 00:33:06.323 }, 00:33:06.323 { 00:33:06.323 "method": "bdev_wait_for_examine" 00:33:06.323 } 00:33:06.323 ] 00:33:06.323 }, 00:33:06.323 { 00:33:06.323 "subsystem": "nbd", 00:33:06.323 "config": [] 00:33:06.323 } 00:33:06.323 ] 00:33:06.323 }' 00:33:06.323 09:07:23 -- common/autotest_common.sh@10 -- # set +x 00:33:06.323 [2024-04-26 09:07:23.432828] Starting SPDK v24.05-pre git sha1 f8d98be2d / DPDK 23.11.0 initialization... 
00:33:06.323 [2024-04-26 09:07:23.432880] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2282770 ] 00:33:06.323 EAL: No free 2048 kB hugepages reported on node 1 00:33:06.323 [2024-04-26 09:07:23.501748] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.323 [2024-04-26 09:07:23.567810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:06.582 [2024-04-26 09:07:23.717321] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:07.148 09:07:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:07.148 09:07:24 -- common/autotest_common.sh@850 -- # return 0 00:33:07.148 09:07:24 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:07.148 09:07:24 -- keyring/file.sh@120 -- # jq length 00:33:07.148 09:07:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:07.406 09:07:24 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:07.406 09:07:24 -- keyring/file.sh@121 -- # get_refcnt key0 00:33:07.406 09:07:24 -- keyring/common.sh@12 -- # get_key key0 00:33:07.406 09:07:24 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:07.406 09:07:24 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:07.406 09:07:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:07.406 09:07:24 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:07.406 09:07:24 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:07.406 09:07:24 -- keyring/file.sh@122 -- # get_refcnt key1 00:33:07.406 09:07:24 -- keyring/common.sh@12 -- # get_key key1 00:33:07.406 09:07:24 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:07.406 09:07:24 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:07.406 09:07:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:07.406 09:07:24 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:07.664 09:07:24 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:07.664 09:07:24 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:07.664 09:07:24 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:07.664 09:07:24 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:07.932 09:07:24 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:07.932 09:07:24 -- keyring/file.sh@1 -- # cleanup 00:33:07.932 09:07:24 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.hgLQIJVZkQ /tmp/tmp.op7YcX44gT 00:33:07.932 09:07:24 -- keyring/file.sh@20 -- # killprocess 2282770 00:33:07.932 09:07:24 -- common/autotest_common.sh@936 -- # '[' -z 2282770 ']' 00:33:07.932 09:07:24 -- common/autotest_common.sh@940 -- # kill -0 2282770 00:33:07.932 09:07:24 -- common/autotest_common.sh@941 -- # uname 00:33:07.932 09:07:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:07.932 09:07:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2282770 00:33:07.932 09:07:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:07.932 09:07:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:07.932 09:07:24 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 2282770' 00:33:07.932 killing process with pid 2282770 00:33:07.932 09:07:24 -- common/autotest_common.sh@955 -- # kill 2282770
00:33:07.932 Received shutdown signal, test time was about 1.000000 seconds
00:33:07.932
00:33:07.932 Latency(us)
00:33:07.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:07.932 ===================================================================================================================
00:33:07.932 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:33:07.932 09:07:24 -- common/autotest_common.sh@960 -- # wait 2282770 00:33:08.196 09:07:25 -- keyring/file.sh@21 -- # killprocess 2281034 00:33:08.196 09:07:25 -- common/autotest_common.sh@936 -- # '[' -z 2281034 ']' 00:33:08.196 09:07:25 -- common/autotest_common.sh@940 -- # kill -0 2281034 00:33:08.196 09:07:25 -- common/autotest_common.sh@941 -- # uname 00:33:08.196 09:07:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:08.196 09:07:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2281034 00:33:08.196 09:07:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:08.196 09:07:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:08.196 09:07:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2281034' 00:33:08.196 killing process with pid 2281034 00:33:08.196 09:07:25 -- common/autotest_common.sh@955 -- # kill 2281034 00:33:08.196 [2024-04-26 09:07:25.250072] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:08.196 09:07:25 -- common/autotest_common.sh@960 -- # wait 2281034
00:33:08.454
00:33:08.454 real 0m12.040s
00:33:08.454 user 0m27.857s
00:33:08.454 sys 0m3.225s
00:33:08.454 09:07:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:08.454 09:07:25 -- common/autotest_common.sh@10 -- # set +x
00:33:08.454 ************************************
00:33:08.454 END TEST keyring_file
00:33:08.454 ************************************
00:33:08.454 09:07:25 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:33:08.454 09:07:25 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:33:08.454 09:07:25 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:33:08.454 09:07:25 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:33:08.454 09:07:25 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:08.454 09:07:25 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:33:08.454 09:07:25 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:08.454 09:07:25 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:33:08.454 09:07:25 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:33:08.454 09:07:25 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:33:08.454 09:07:25 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:08.454 09:07:25 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:33:08.454 09:07:25 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:33:08.454 09:07:25 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:33:08.454 09:07:25 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:33:08.454 09:07:25 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:33:08.454 09:07:25 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:33:08.454 09:07:25 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:33:08.454 09:07:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:08.454 09:07:25 -- common/autotest_common.sh@10 -- # set +x 00:33:08.454 09:07:25 -- spdk/autotest.sh@381 -- # 
autotest_cleanup 00:33:08.454 09:07:25 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:33:08.454 09:07:25 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:33:08.454 09:07:25 -- common/autotest_common.sh@10 -- # set +x 00:33:15.008 INFO: APP EXITING 00:33:15.008 INFO: killing all VMs 00:33:15.008 INFO: killing vhost app 00:33:15.008 INFO: EXIT DONE 00:33:18.286 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:33:18.286 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:33:18.286 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:33:18.286 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:33:18.286 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:33:18.286 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:33:18.286 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:33:18.286 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:33:18.286 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:33:18.286 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:33:18.286 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:33:18.544 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:33:18.544 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:33:18.544 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:33:18.544 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:33:18.544 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:33:18.544 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:33:21.817 Cleaning 00:33:21.817 Removing: /var/run/dpdk/spdk0/config 00:33:21.817 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:21.817 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:21.817 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:21.817 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:21.817 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:21.817 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:21.817 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:21.817 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:21.817 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:21.817 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:21.817 Removing: /var/run/dpdk/spdk1/config 00:33:21.817 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:21.817 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:21.817 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:21.817 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:21.817 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:21.817 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:21.817 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:21.817 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:21.817 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:21.817 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:21.817 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:21.817 Removing: /var/run/dpdk/spdk2/config 00:33:21.817 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:21.817 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:21.817 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:21.817 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:21.817 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:21.817 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 
00:33:21.817 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:33:21.817 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:33:21.817 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:33:21.817 Removing: /var/run/dpdk/spdk2/hugepage_info
00:33:21.817 Removing: /var/run/dpdk/spdk3/config
00:33:21.817 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:33:21.817 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:33:21.817 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:33:21.817 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:33:21.817 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:33:21.817 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:33:21.817 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:33:21.817 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:33:21.817 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:33:21.817 Removing: /var/run/dpdk/spdk3/hugepage_info
00:33:21.817 Removing: /var/run/dpdk/spdk4/config
00:33:21.817 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:33:21.817 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:33:21.817 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:33:21.817 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:33:21.817 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:33:21.817 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:33:21.817 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:33:21.817 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:33:21.817 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:33:21.817 Removing: /var/run/dpdk/spdk4/hugepage_info
00:33:21.817 Removing: /dev/shm/bdev_svc_trace.1
00:33:21.817 Removing: /dev/shm/nvmf_trace.0
00:33:21.817 Removing: /dev/shm/spdk_tgt_trace.pid1906960
00:33:21.817 Removing: /var/run/dpdk/spdk0
00:33:21.817 Removing: /var/run/dpdk/spdk1
00:33:21.817 Removing: /var/run/dpdk/spdk2
00:33:21.817 Removing: /var/run/dpdk/spdk3
00:33:21.817 Removing: /var/run/dpdk/spdk4
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1904282
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1905554
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1906960
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1907800
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1908724
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1908943
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1910052
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1910308
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1910696
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1912306
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1913807
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1914202
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1914538
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1914889
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1915229
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1915519
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1915814
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1916138
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1917192
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1920251
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1920642
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1921040
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1921060
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1921688
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1921898
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1922466
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1922577
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1923027
00:33:21.817 Removing: /var/run/dpdk/spdk_pid1923052
00:33:21.818 Removing: /var/run/dpdk/spdk_pid1923353
00:33:21.818 Removing: /var/run/dpdk/spdk_pid1923552
00:33:21.818 Removing: /var/run/dpdk/spdk_pid1924017
00:33:21.818 Removing: /var/run/dpdk/spdk_pid1924306
00:33:21.818 Removing: /var/run/dpdk/spdk_pid1924639
00:33:21.818 Removing: /var/run/dpdk/spdk_pid1924965
00:33:21.818 Removing: /var/run/dpdk/spdk_pid1925240
00:33:21.818 Removing: /var/run/dpdk/spdk_pid1925336
00:33:21.818 Removing: /var/run/dpdk/spdk_pid1925630
00:33:21.818 Removing: /var/run/dpdk/spdk_pid1925931
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1926217
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1926502
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1926801
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1927088
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1927391
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1927696
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1927993
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1928293
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1928592
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1928882
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1929171
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1929453
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1929743
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1930035
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1930345
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1930668
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1930984
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1931293
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1931513
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1931874
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1935974
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1983273
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1987903
00:33:22.075 Removing: /var/run/dpdk/spdk_pid1997817
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2003567
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2008202
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2008935
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2021176
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2021256
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2022062
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2022896
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2023917
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2024453
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2024466
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2024730
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2024809
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2024928
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2025799
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2026618
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2027656
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2028194
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2028200
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2028470
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2029843
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2031150
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2040330
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2040611
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2045167
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2051295
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2054054
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2065087
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2074625
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2076362
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2077629
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2095627
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2099872
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2104699
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2106293
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2108268
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2108437
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2108708
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2108980
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2109560
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2111501
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2112561
00:33:22.075 Removing: /var/run/dpdk/spdk_pid2113135
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2115388
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2116128
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2116911
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2121256
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2132498
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2136710
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2143000
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2144493
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2146216
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2150839
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2155122
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2163394
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2163402
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2168225
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2168478
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2168738
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2169133
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2169195
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2174367
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2175027
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2179665
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2182582
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2188320
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2193942
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2201399
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2201438
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2220650
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2221210
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2222130
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2223119
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2224226
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2224783
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2225538
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2226136
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2230673
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2230946
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2237331
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2237643
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2239933
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2248243
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2248248
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2253810
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2255918
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2258061
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2259147
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2261115
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2262597
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2272300
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2272828
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2273362
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2275842
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2276369
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2276903
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2281034
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2281148
00:33:22.332 Removing: /var/run/dpdk/spdk_pid2282770
00:33:22.332 Clean
00:33:22.589 09:07:39 -- common/autotest_common.sh@1437 -- # return 0
00:33:22.589 09:07:39 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup
00:33:22.589 09:07:39 -- common/autotest_common.sh@716 -- # xtrace_disable
00:33:22.589 09:07:39 -- common/autotest_common.sh@10 -- # set +x
00:33:22.590 09:07:39 -- spdk/autotest.sh@384 -- # timing_exit autotest
00:33:22.590 09:07:39 -- common/autotest_common.sh@716 -- # xtrace_disable
00:33:22.590 09:07:39 -- common/autotest_common.sh@10 -- # set +x
00:33:22.590 09:07:39 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:22.848 09:07:39 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:33:22.848 09:07:39 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:33:22.848 09:07:39 -- spdk/autotest.sh@389 -- # hash lcov
00:33:22.848 09:07:39 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:33:22.848 09:07:39 -- spdk/autotest.sh@391 -- # hostname
00:33:22.848 09:07:39 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:33:22.848 geninfo: WARNING: invalid characters removed from testname!
00:33:44.777 09:07:59 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:45.035 09:08:02 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:46.936 09:08:03 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:48.311 09:08:05 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:50.211 09:08:07 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:51.586 09:08:08 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:53.484 09:08:10 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:53.484 09:08:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:53.484 09:08:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:53.484 09:08:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:53.484 09:08:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:53.484 09:08:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:53.484 09:08:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:53.484 09:08:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:53.484 09:08:10 -- paths/export.sh@5 -- $ export PATH
00:33:53.484 09:08:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:53.484 09:08:10 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:53.484 09:08:10 -- common/autobuild_common.sh@435 -- $ date +%s
00:33:53.484 09:08:10 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714115290.XXXXXX
00:33:53.484 09:08:10 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714115290.NMGu4R
00:33:53.484 09:08:10 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:33:53.484 09:08:10 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:33:53.484 09:08:10 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:33:53.484 09:08:10 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:53.484 09:08:10 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:53.484 09:08:10 -- common/autobuild_common.sh@451 -- $ get_config_params
00:33:53.484 09:08:10 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:33:53.484 09:08:10 -- common/autotest_common.sh@10 -- $ set +x
00:33:53.484 09:08:10 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:33:53.484 09:08:10 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:33:53.484 09:08:10 -- pm/common@17 -- $ local monitor
00:33:53.484 09:08:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:53.484 09:08:10 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2295881
00:33:53.484 09:08:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:53.484 09:08:10 -- pm/common@21 -- $ date +%s
00:33:53.484 09:08:10 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2295883
00:33:53.484 09:08:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:53.484 09:08:10 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2295886
00:33:53.484 09:08:10 -- pm/common@21 -- $ date +%s
00:33:53.484 09:08:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:53.484 09:08:10 -- pm/common@21 -- $ date +%s
00:33:53.484 09:08:10 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2295888
00:33:53.484 09:08:10 -- pm/common@26 -- $ sleep 1
00:33:53.484 09:08:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714115290
00:33:53.484 09:08:10 -- pm/common@21 -- $ date +%s
00:33:53.484 09:08:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714115290
00:33:53.484 09:08:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714115290
00:33:53.484 09:08:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714115290
00:33:53.484 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714115290_collect-bmc-pm.bmc.pm.log
00:33:53.484 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714115290_collect-vmstat.pm.log
00:33:53.484 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714115290_collect-cpu-load.pm.log
00:33:53.484 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714115290_collect-cpu-temp.pm.log
00:33:54.439 09:08:11 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:33:54.439 09:08:11 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:33:54.439 09:08:11 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:54.439 09:08:11 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:54.439 09:08:11 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:54.439 09:08:11 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:54.439 09:08:11 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:54.439 09:08:11 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:54.439 09:08:11 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:54.439 09:08:11 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:54.439 09:08:11 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:54.439 09:08:11 -- pm/common@30 -- $ signal_monitor_resources TERM
00:33:54.439 09:08:11 -- pm/common@41 -- $ local monitor pid pids signal=TERM
00:33:54.439 09:08:11 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:54.439 09:08:11 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:33:54.439 09:08:11 -- pm/common@45 -- $ pid=2295898
00:33:54.439 09:08:11 -- pm/common@52 -- $ sudo kill -TERM 2295898
00:33:54.439 09:08:11 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:54.439 09:08:11 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:33:54.439 09:08:11 -- pm/common@45 -- $ pid=2295899
00:33:54.439 09:08:11 -- pm/common@52 -- $ sudo kill -TERM 2295899
00:33:54.439 09:08:11 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:54.439 09:08:11 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:33:54.439 09:08:11 -- pm/common@45 -- $ pid=2295897
00:33:54.439 09:08:11 -- pm/common@52 -- $ sudo kill -TERM 2295897
00:33:54.439 09:08:11 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:54.439 09:08:11 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:33:54.439 09:08:11 -- pm/common@45 -- $ pid=2295894
00:33:54.439 09:08:11 -- pm/common@52 -- $ sudo kill -TERM 2295894
00:33:54.698 + [[ -n 1794207 ]]
00:33:54.698 + sudo kill 1794207
00:33:54.710 [Pipeline] }
00:33:54.731 [Pipeline] // stage
00:33:54.737 [Pipeline] }
00:33:54.755 [Pipeline] // timeout
00:33:54.761 [Pipeline] }
00:33:54.779 [Pipeline] // catchError
00:33:54.784 [Pipeline] }
00:33:54.803 [Pipeline] // wrap
00:33:54.809 [Pipeline] }
00:33:54.828 [Pipeline] // catchError
00:33:54.838 [Pipeline] stage
00:33:54.840 [Pipeline] { (Epilogue)
00:33:54.855 [Pipeline] catchError
00:33:54.857 [Pipeline] {
00:33:54.872 [Pipeline] echo
00:33:54.874 Cleanup processes
00:33:54.880 [Pipeline] sh
00:33:55.166 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:55.166 2295980 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:33:55.166 2296348 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:55.180 [Pipeline] sh
00:33:55.460 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:55.460 ++ grep -v 'sudo pgrep'
00:33:55.460 ++ awk '{print $1}'
00:33:55.460 + sudo kill -9 2295980
00:33:55.472 [Pipeline] sh
00:33:55.751 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:55.751 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:33:59.943 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:34:04.168 [Pipeline] sh
00:34:04.447 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:04.447 Artifacts sizes are good
00:34:04.462 [Pipeline] archiveArtifacts
00:34:04.470 Archiving artifacts
00:34:04.600 [Pipeline] sh
00:34:04.887 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:34:04.902 [Pipeline] cleanWs
00:34:04.912 [WS-CLEANUP] Deleting project workspace...
00:34:04.912 [WS-CLEANUP] Deferred wipeout is used...
00:34:04.918 [WS-CLEANUP] done
00:34:04.920 [Pipeline] }
00:34:04.941 [Pipeline] // catchError
00:34:04.955 [Pipeline] sh
00:34:05.233 + logger -p user.info -t JENKINS-CI
00:34:05.241 [Pipeline] }
00:34:05.258 [Pipeline] // stage
00:34:05.264 [Pipeline] }
00:34:05.280 [Pipeline] // node
00:34:05.286 [Pipeline] End of Pipeline
00:34:05.326 Finished: SUCCESS